$j > 1$, then $a_{i(j-1)} \neq \emptyset$ and $a_{i(j-1)} <_P a_{ij}$; and if $a_{ij} \neq \emptyset$ and $i > 1$, then $a_{(i-1)j} \neq \emptyset$ and $a_{(i-1)j} \not\geq_P a_{ij}$.
Under the interpretation of $A$ as a partial matrix, this condition means that the partial filling takes the shape of a Young diagram in English orientation and that, moreover, the columns of $A$ are nondecreasing (a “semistandardness” condition).
As noted in [Gas96], each proper $\alpha$-coloring $\kappa$ of $G$ corresponds to a P-array $A_\kappa$ by filling row $i$ of $A_\kappa$ with the elements of $\kappa^{-1}(i)$ in their unique P-increasing order. Thus, for any partition $\mu$, $[m_\mu]X_{(G,\alpha)}$ is the number of distinct P-arrays $A$ whose nonempty positions correspond to the Young diagram of the partition $\mu$, and where for each $v \in P$, the number of entries equal to $v$ is exactly $\alpha(v)$. We say that such P-arrays have shape $\mu$ and content $\alpha$, and denote the number of such P-arrays by $N_P(\mu, \alpha)$. Similarly, we write $T_P(\mu, \alpha)$ for the number of P-tableaux of shape $\mu$ and content $\alpha$. Thus, we may rewrite Equation (3.5) as
$$[\overline{s}_\lambda]\overline{X}_G = \sum_{\pi \in S_k} \operatorname{sgn}(\pi) \sum_{q_1, \ldots, q_k} \left(\!\!\binom{0}{q_1}\!\!\right) \cdots \left(\!\!\binom{k-1}{q_k}\!\!\right) \sum_{|\alpha| = n - Q} N_P(\tau(\lambda, \pi, q_1, \ldots, q_k), \alpha)$$
$$= \sum_{q_1, \ldots, q_k} \left(\!\!\binom{0}{q_1}\!\!\right) \cdots \left(\!\!\binom{k-1}{q_k}\!\!\right) \sum_{|\alpha| = n - Q} \sum_{\pi \in S_k} \operatorname{sgn}(\pi)\, N_P(\tau(\lambda, \pi, q_1, \ldots, q_k), \alpha).$$
As part of the proof of [Gas96, Theorem 3], Gasharov shows that for any partition $\lambda$,
$$\sum_{\pi \in S_k} \operatorname{sgn}(\pi)\, N_P(\tau(\lambda, \pi, q_1, \ldots, q_k), \alpha) = T_P(\tau(\lambda, \mathrm{id}_{S_k}, q_1, \ldots, q_k), \alpha),$$
+
+THE KROMATIC SYMMETRIC FUNCTION: A K-THEORETIC ANALOGUE OF XG
+11
the number of P-tableaux whose shape is the Young diagram with row lengths $\{\lambda_1 - q_1, \ldots, \lambda_k - q_k\}$ and whose content is $\alpha$. Thus,
$$[\overline{s}_\lambda]\overline{X}_G = \sum_{\substack{q_1, \ldots, q_k \\ Q \le n}} \left(\!\!\binom{0}{q_1}\!\!\right) \cdots \left(\!\!\binom{k-1}{q_k}\!\!\right) \sum_{|\alpha| = n - Q} T_P(\tau(\lambda, \mathrm{id}_{S_k}, q_1, \ldots, q_k), \alpha),$$
which is a nonnegative integer. Since this is true for every partition $\lambda$, the Kromatic symmetric function $\overline{X}_G$ is Grothendieck-positive.
+□
Note that the proof of Theorem 3.6 gives an effective (but somewhat complicated) formula for the coefficients of the symmetric Grothendieck functions $\overline{s}_\lambda$ in the expansion of the Kromatic symmetric function $\overline{X}_G$ for $G$ a claw-free incomparability graph.
Theorem 3.6 (together with Gasharov’s Schur analogue) strongly suggests an interpretation, and a proof, via the topology of Grassmannians. We would be very interested in a solution to the following problem.
Problem 3.7. For each claw-free incomparability graph $G$, find a corresponding subvariety $V_G$ of the Grassmannian such that the cohomology class of $V_G$ is represented in Sym by $X_G$ and the structure sheaf class of $V_G$ is represented by $\overline{X}_G$.
+4. Analogues of the Stanley–Stembridge conjecture
The previous section shows that Schur-positivity of $X_G$ when $G$ is the incomparability graph of a $(3+1)$-free poset lifts to an analogue for $\overline{X}_G$. It is natural to ask whether the Stanley–Stembridge conjecture, which asserts that such $X_G$ are e-positive, can similarly be lifted to the context of the Kromatic symmetric function. However, it appears that the answer is “no.”
+We propose two definitions for a lift of the e-basis to the K-theoretic setting.
+On one hand, e-basis
+elements in usual symmetric function theory may be defined in terms of fillings of single-column Young
+diagrams, so we may lift this formula.
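As a reminder, the classical identity being lifted expresses $e_n$ as a sum over strictly increasing fillings of a single column of length $n$:

```latex
e_n \;=\; s_{1^n} \;=\; \sum_{i_1 < i_2 < \cdots < i_n} x_{i_1} x_{i_2} \cdots x_{i_n},
\qquad\text{e.g.}\qquad
e_2 \;=\; \sum_{i < j} x_i x_j .
```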
Definition 4.1. The tableau K-elementary symmetric function $\bar{e}_\lambda$ is given by
$$\bar{e}_n = \overline{s}_{1^n} \qquad\text{and}\qquad \bar{e}_\lambda = \bar{e}_{\lambda_1} \cdots \bar{e}_{\lambda_{\ell(\lambda)}}.$$
On the other hand, we may also define $e_n = \frac{1}{n!}X_{K_n}$, and lift this characterization.
Definition 4.2. The graph K-elementary symmetric function is given by
$$\bar{e}'_n = \frac{1}{n!}\overline{X}_{K_n} \qquad\text{and}\qquad \bar{e}'_\lambda = \bar{e}'_{\lambda_1} \cdots \bar{e}'_{\lambda_{\ell(\lambda)}}.$$
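For instance (a sketch under the set-coloring convention above, in which the single vertex of $K_1$ may receive any finite nonempty set of distinct colors, each choice contributing a squarefree monomial):

```latex
\bar{e}'_1 \;=\; \overline{X}_{K_1}
\;=\; \sum_{\substack{S \subset \mathbb{N} \\ 0 < |S| < \infty}} \; \prod_{i \in S} x_i
\;=\; \sum_{k \ge 1} e_k
\;=\; e_1 + e_2 + e_3 + \cdots,
```

an inhomogeneous deformation of $e_1$ whose lowest-degree term is $e_1$ itself.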
It is reasonable to hope (for extending the Stanley–Stembridge conjecture) that $\overline{X}_G$ is positive in one of these K-theoretic e-bases whenever $G$ is a claw-free incomparability graph, or even just when $G$ is a unit interval graph. However, one can compute that $\overline{X}_{P_3}$ is not positive in either K-theoretic e-basis $\{\bar{e}_\lambda\}$ or $\{\bar{e}'_\lambda\}$, dashing any such hopes.
The terms of $\overline{X}_{P_3}$ that are homogeneous of degree 3 must come from tableau or graph K-elementary symmetric functions of degree 3, and have coefficients corresponding to the e-expansion of $X_{P_3}$. Since $X_{P_3} = 3e_3 + e_{21}$, one sees that the terms of $\overline{X}_{P_3}$ for $|\lambda| = 3$ in the $\bar{e}$-basis are $3\bar{e}_3 + \bar{e}_{21}$, and in the $\bar{e}'$-basis are $3\bar{e}'_3 + \bar{e}'_{21}$. However, we now encounter problems with the $|\lambda| = 4$ terms. In particular, both $\bar{e}_{21}$ and $\bar{e}'_{21}$ are supported on the monomial $x_1^2x_2^2$, with two distinct variables each of degree 2. However, it is easy to check that there is no proper set coloring of $P_3$ using color 1 exactly twice and color 2 exactly twice; thus, these monomials must be cancelled by $\bar{e}_\mu$ or $\bar{e}'_\mu$ terms with strictly negative coefficients.
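This impossibility is small enough to confirm by brute force. The sketch below (hypothetical helper names, not from the paper; it assumes the set-coloring convention above, where adjacent vertices of $P_3$ must receive disjoint nonempty color sets) enumerates all such colorings with colors drawn from $\{1, 2\}$, which are the only colors that can appear in a coloring of content $x_1^2x_2^2$:

```python
from itertools import product

# Nonempty subsets of the color set {1, 2}.
nonempty_subsets = [frozenset(s) for s in ({1}, {2}, {1, 2})]

# P3 is the path v0 - v1 - v2; adjacent vertices need disjoint color sets.
edges = [(0, 1), (1, 2)]

def is_proper(assignment):
    return all(assignment[u].isdisjoint(assignment[v]) for u, v in edges)

proper = [a for a in product(nonempty_subsets, repeat=3) if is_proper(a)]

# Colorings of content x_1^2 * x_2^2: each color used on exactly two vertices.
found = [a for a in proper
         if all(sum(1 for s in a if c in s) == 2 for c in (1, 2))]

print(len(proper), found)  # prints: 2 []
```

Exactly two proper set colorings of $P_3$ use only the colors 1 and 2 (the middle vertex gets one color, the two ends the other), and neither has the required content, confirming the claim.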
That this breakdown is so fundamental suggests that it may not be possible to reasonably generalize e-positivity to the Kromatic symmetric function, in stark contrast with the generalization of Schur-positivity given in Theorem 3.6. It also suggests that the Stanley–Stembridge conjecture is not amenable to a topological interpretation along the lines of Problem 3.7.
+
+12
+LOGAN CREW, OLIVER PECHENIK, AND SOPHIE SPIRKL
+Acknowledgements
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference numbers RGPIN-2020-03912, RGPIN-2021-00010, and RGPIN-2022-03093].
Cette recherche a été financée par le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG), [numéros de référence RGPIN-2020-03912, RGPIN-2021-00010 et RGPIN-2022-03093].
+This project was funded in part by the Government of Ontario.
+References
+[AN21]
+Alex Abreu and Antonio Nigro, Chromatic symmetric functions from the modular law, Journal of Combinatorial
+Theory, Series A 180 (2021), 105407.
+[AS22]
+Per Alexandersson and Robin Sulzgruber, A combinatorial expansion of vertical-strip LLT polynomials in the basis
+of elementary symmetric functions, Advances in Mathematics 400 (2022), 108256.
+[AWvW21] Farid Aliniaeifard, Victor Wang, and Stephanie van Willigenburg, The chromatic symmetric function of a graph
+centred at a vertex, preprint (2021), arXiv:2108.04850.
+[Bir12]
George D. Birkhoff, A determinant formula for the number of ways of coloring a map, Annals of Mathematics 14 (1912), no. 1/4, 42–46.
+[Buc02]
+Anders Skovsted Buch, A Littlewood-Richardson rule for the K-theory of Grassmannians, Acta Mathematica 189
+(2002), no. 1, 37–78.
+[CH22]
Soojin Cho and Jaehyun Hong, Positivity of chromatic symmetric functions associated with Hessenberg functions of bounce number 3, Electronic Journal of Combinatorics (2022), Paper No. P2.19.
+[CMP23]
+Laura Colmenarejo, Alejandro H. Morales, and Greta Panova, Chromatic symmetric functions of Dyck paths and
+q-rook theory, European Journal of Combinatorics 107 (2023), Paper No. 103595, 36 pages.
+[CS20]
+Logan Crew and Sophie Spirkl, A deletion–contraction relation for the chromatic symmetric function, European
+Journal of Combinatorics 89 (2020), 103143.
+[Dah19]
Samantha Dahlberg, A new formula for Stanley’s chromatic symmetric function for unit interval graphs and e-positivity for triangular ladder graphs, Séminaire Lotharingien de Combinatoire 82 (2019).
+[DFvW20]
Samantha Dahlberg, Angèle Foley, and Stephanie van Willigenburg, Resolving Stanley’s e-positivity of claw-contractible-free graphs, Journal of the European Mathematical Society (JEMS) 22 (2020), no. 8, 2673–2696.
+[Die17]
+Reinhard Diestel, Graph theory, fifth ed., Graduate Texts in Mathematics, vol. 173, Springer, Berlin, 2017.
+[DvW18]
+Samantha Dahlberg and Stephanie van Willigenburg, Lollipop and lariat symmetric functions, SIAM Journal on
+Discrete Mathematics 32 (2018), no. 2, 1029–1039.
+[DvW20]
+, Chromatic symmetric functions in noncommuting variables revisited, Advances in Applied Mathematics
+112 (2020), 101942.
+[Gas96]
+Vesselin Gasharov, Incomparability graphs of (3 + 1)-free posets are s-positive, Discrete Mathematics 157 (1996),
+no. 1-3, 193–197.
+[GS01]
David D. Gebhard and Bruce E. Sagan, A chromatic symmetric function in noncommuting variables, Journal of Algebraic Combinatorics 13 (2001), no. 3, 227–255.
+[Gua13]
+Mathieu Guay-Paquet, A modular relation for the chromatic symmetric functions of (3 + 1)-free posets, preprint
+(2013), arXiv:1306.2400.
+[HHT19]
Angèle M. Hamel, Chính T. Hoàng, and Jake E. Tuero, Chromatic symmetric functions and H-free graphs, Graphs and Combinatorics 35 (2019), no. 4, 815–825.
+[HW20]
+James Haglund and Andrew Timothy Wilson, Macdonald polynomials and chromatic quasisymmetric functions,
+Electronic Journal of Combinatorics 27 (2020), no. 3, Paper No. 3.37, 21 pages.
+[Hwa22]
+Byung-Hak Hwang, Chromatic quasisymmetric functions and noncommutative P -symmetric functions, preprint
+(2022), arXiv:2208.09857.
+[Iwa20]
+Shinsuke Iwao, Grothendieck polynomials and the boson-fermion correspondence, Algebraic Combinatorics 3 (2020),
+no. 5, 1023–1040.
+[LN14]
+Alain Lascoux and Hiroshi Naruse, Finite sum Cauchy identity for dual Grothendieck polynomials, Japan Academy.
+Proceedings. Series A. Mathematical Sciences 90 (2014), no. 7, 87–91.
+[LP07]
+Thomas Lam and Pavlo Pylyavskyy, Combinatorial Hopf algebras and K-homology of Grassmannians, International
+Mathematics Research Notices. IMRN (2007), no. 24, Art. ID rnm125, 48 pages.
+[Mac98]
Ian G. Macdonald, Symmetric functions and Hall polynomials, Oxford University Press, 1998.
+[Man01]
Laurent Manivel, Symmetric functions, Schubert polynomials and degeneracy loci, SMF/AMS Texts and Monographs, vol. 6, American Mathematical Society, Providence, RI and Société Mathématique de France, Paris, 2001, Translated from the 1998 French original by John R. Swallow, Cours Spécialisés, 3.
+[MPS21]
+Cara Monical, Oliver Pechenik, and Dominic Searles, Polynomials from combinatorial K-theory, Canadian Journal
+of Mathematics 73 (2021), no. 1, 29–62.
+[NS17]
+Gleb Nenashev and Boris Shapiro, “K-theoretic” analog of Postnikov-Shapiro algebra distinguishes graphs, Journal
+of Combinatorial Theory. Series A 148 (2017), 316–332.
+[PS04]
+Alexander Postnikov and Boris Shapiro, Trees, parking functions, syzygies, and deformations of monomial ideals,
+Transactions of the American Mathematical Society 356 (2004), no. 8, 3109–3142.
+
+[PY17]
+Oliver Pechenik and Alexander Yong, Genomic tableaux, Journal of Algebraic Combinatorics 45 (2017), no. 3,
+649–685.
+[SF99]
Richard P. Stanley, Enumerative combinatorics, Vol. 2, Cambridge Studies in Advanced Mathematics, vol. 62, Cambridge University Press, Cambridge, 1999, with an appendix by Sergey Fomin.
+[SS93]
+Richard P. Stanley and John R. Stembridge, On immanants of Jacobi-Trudi matrices and permutations with
+restricted position, Journal of Combinatorial Theory. Series A 62 (1993), no. 2, 261–279.
+[Sta95]
+Richard P. Stanley, A symmetric function generalization of the chromatic polynomial of a graph, Advances in
+Mathematics 111 (1995), no. 1, 166–194.
+[Sta98]
+, Graph colorings and related symmetric functions: ideas and applications a description of results, interest-
+ing applications, & notable open problems, Discrete Mathematics 193 (1998), no. 1-3, 267–286.
+[SW16]
John Shareshian and Michelle L. Wachs, Chromatic quasisymmetric functions, Advances in Mathematics 295 (2016), 497–551.
+[Tom21]
+Foster Tom, Private communication to L. Crew and S. Spirkl, 2021.
+[TWZ22]
+Vasu Tewari, Andrew Timothy Wilson, and Philip B. Zhang, Chromatic nonsymmetric polynomials of Dyck graphs
+are slide-positive, Proceedings of the American Mathematical Society 150 (2022), no. 5, 1873–1888.
+[TY09]
+Hugh Thomas and Alexander Yong, A jeu de taquin theory for increasing tableaux, with applications to K-theoretic
+Schubert calculus, Algebra & Number Theory 3 (2009), no. 2, 121–148.
+[Wes21]
+Douglas B. West, Combinatorial mathematics, Cambridge University Press, Cambridge, 2021.
+Department of Combinatorics & Optimization, University of Waterloo, Waterloo, ON, N2L 3G1, Canada.
+Email address: {lcrew, opecheni, sspirkl}@uwaterloo.ca
+
diff --git a/_tA0T4oBgHgl3EQfPf_J/content/tmp_files/load_file.txt b/_tA0T4oBgHgl3EQfPf_J/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..0fc0fa75c2a8f2ced9ce0fef90538948a7097ce5
--- /dev/null
+++ b/_tA0T4oBgHgl3EQfPf_J/content/tmp_files/load_file.txt
@@ -0,0 +1,622 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf,len=621
+page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content='02177v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content='CO] 5 Jan 2023 THE KROMATIC SYMMETRIC FUNCTION: A K-THEORETIC ANALOGUE OF XG LOGAN CREW, OLIVER PECHENIK, AND SOPHIE SPIRKL Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Schur functions are a basis of the symmetric function ring that represent Schubert cohomology classes for Grassmannians.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Replacing the cohomology ring with K-theory yields a rich combinatorial theory of inhomogeneous deformations, where Schur functions are replaced by their K-analogues, the basis of symmetric Grothendieck functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' We introduce and initiate a theory of the Kromatic symmetric function XG, a K-theoretic analogue of the chromatic symmetric function XG of a graph G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' The Kromatic symmetric function is a generating series for graph colorings in which vertices may receive any nonempty set of distinct colors such that neighboring color sets are disjoint.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Our main result lifts a theorem of Gasharov (1996) to this setting, showing that when G is a claw-free incomparability graph, XG is a positive sum of symmetric Grothendieck functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' This result suggests a topological interpretation of Gasharov’s theorem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' We then show that the Kromatic symmetric functions of path graphs are not positive in any of several K-analogues of the e-basis of symmetric functions, demon- strating that the Stanley–Stembridge conjecture (1993) does not have such a lift to K-theory and so is unlikely to be amenable to a topological perspective.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' We also define a vertex-weighted extension of XG and show that it admits a deletion–contraction relation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Finally, we give a K-analogue for XG of the classic monomial-basis expansion of XG.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Introduction The chromatic symmetric function XG of a graph G was introduced by R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Stanley [Sta95] as a gen- eralization of G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Birkhoff’s chromatic polynomial [Bir12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' While the chromatic polynomial enumerates proper graph colorings by the number of colors used, XG also records how many times each color is used.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' A recent boom of research regarding XG has focused on the Stanley–Stembridge conjecture [SS93], which proposes (in a reformulation by M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Guay-Paquet [Gua13]) that unit interval graphs have chromatic sym- metric functions that expand positively in the e-basis of the ring Sym of symmetric functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' In the last few years, various special cases of this conjecture have been established through direct combinatorial analy- sis, including the cases of lollipop graphs [DvW18] and many claw-free graphs [HHT19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Another approach has been to consider various generalizations of the chromatic symmetric function and corresponding lifts of the Stanley–Stembridge conjecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Examples of this latter approach include the chromatic quasisymmetric function and Shareshian–Wachs conjecture of [SW16] (further studied in [AN21, AS22, CH22, CMP23]), the chromatic nonsymmetric functions of J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Haglund–A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Wilson [HW20] (further studied in [TWZ22]), and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Gebhard–B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Sagan’s [GS01] chromatic symmetric function in noncommuting variables combined with notions of (e)-positivity and appendable (e)-positivity (further studied in [AWvW21, Dah19, DvW20]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Our work provides a novel generalization of XG in the same vein.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' An important appearance of the ring of symmetric functions Sym is as the cohomology of complex Grass- mannians (parameter spaces for linear subspaces of a vector space) or more precisely for the classifying space BU.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Here, the Schubert classes derived from a natural cell decomposition of BU are represented by the Schur function basis sλ of Sym.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' A richer perspective into the topology of BU is obtained by replacing cohomology with a generalized cohomology theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' In particular, there has been much focus on studying the associated combinatorics of the K-theory ring (see [Buc02, MPS21, PY17, TY09]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' In this context, many of the classical objects of symmetric function theory are seen to have interesting K-analogues, often resembling “superpositions” of classical objects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' For example, classical semistandard Young tableaux are replaced by set-valued tableaux (allowing multiple labels per cell), while Schur functions are replaced by Grothendieck polynomials sλ (inhomogeneous deformations of sλ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Date: January 6, 2023.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 2020 Mathematics Subject Classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 05C15, 05C31, 05E05.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Key words and phrases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' chromatic symmetric function, Grothendieck polynomial, K-theory, deletion–contraction relation, Stanley–Stembridge conjecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 1 2 LOGAN CREW, OLIVER PECHENIK, AND SOPHIE SPIRKL Our work introduces a K-analogue of the chromatic symmetric function XG, enumerating colorings of the graph G that assign a nonempty set of distinct colors to each vertex such that adjacent vertices receive disjoint sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' While our Kromatic symmetric function XG is new, similar functions have been previously considered.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' The first such function was originally discussed by R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Stanley [Sta98] in the context of graph analogues of symmetric functions, with connections to the real-rootedness of polynomials.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Recently, as part of his effort to refine Schur-positivity results and the Stanley–Stembridge conjecture, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Hwang [Hwa22] studied a similar quasisymmetric function for graphs endowed with a fixed map α : V (G) → N that dictates the size of the set of colors each vertex receives.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' To connect chromatic quasisymmetric functions of vertex- weighted graphs to horizontal-strip LLT polynomials, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Tom [Tom21] has considered a variant for fixed α with repeated colors allowed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Our work appears to be the first to connect these ideas to the combinatorics of K-theoretic Schubert calculus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' (However, [NS17] is similar in spirit to our work, developing a K-theoretic analogue of the Postnikov–Shapiro algebra [PS04], an apparently unrelated invariant of graphs).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' In this paper, having introduced the Kromatic symmetric function, we begin to develop its combinatorial theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' We show that the Kromatic symmetric function XG for any graph G expands positively in a K- theoretic analogue (that we also introduce) of the monomial basis of Sym.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' In this expansion, the coefficients enumerate coverings of the graph by (possibly overlapping) stable sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' We further extend the definition of XG to a vertex-weighted setting, where we give a deletion–contraction relation analogous to that developed by the first and last authors [CS20] for the vertex-weighted version of XG.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Our main result is that the Kromatic symmetric function of a claw-free incomparability graph expands posi- tively in the symmetric Grothendieck basis sλ of Sym, lifting to K-theory a celebrated result of V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Gasharov [Gas96] that such graphs have Schur-positive chromatic symmetric functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' While all known proofs of Gasharov’s theorem are representation-theoretic or purely combinatorial, the existence of our K-theoretic analogue suggests that both results likely also have an interpretation in terms of the topology of Grassmanni- ans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Precisely, for each claw-free incomparability graph G, there should be a subvariety of the Grassmannian whose cohomology class is represented by XG and whose K-theoretic structure sheaf class is represented by XG.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' It would be very interesting to have an explicit construction of such subvarieties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' On the other hand, we show that the Kromatic symmetric functions XPn of path graphs Pn generally do not expand positively in either of two K-theoretic deformations we propose for the e-basis of Sym.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' This fact suggests that the Stanley–Stembridge conjecture, if true, is not naturally interpreted in terms of the cohomology of Grassmannians and is unlikely to be amenable to such topological tools from Schubert calculus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' We hope these observations can play a similar role to [DFvW20] in limiting the range of potential avenues of attack on the Stanley–Stembridge conjecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' This paper is organized as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' In Section 2, we provide an overview of the background and notation used from symmetric function theory (Section 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content='1), K-theoretic Schubert calculus (Section 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content='2), and graph theory (Section 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content='3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' In Section 3, we formally introduce the Kromatic symmetric function XG and give its basic properties, including a formula for the expansion in a new K-analogue of the monomial basis of Sym and a deletion–contraction relation for a vertex-weighted generalization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' We also give our main theorem that the Kromatic symmetric functions of claw-free incomparability graphs expand positively in symmetric Grothendieck functions, lifting the main result of [Gas96].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' In Section 4, we introduce two different K-theoretic analogues of the e-basis of Sym and show that the Kromatic symmetric function XP3 of a 3-vertex path graph P3 is not positive in either analogue, casting doubt on hopes for a Schubert calculus- based approach to the Stanley–Stembridge conjecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Background Throughout this work, N denotes the set of (strictly) positive integers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' We write [n] for the set of positive integers {1, 2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=', n}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' If S is any set, 2S denotes the power set of all subsets of S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
2.1. Partitions and symmetric functions. In this section, we give a brief overview of the necessary background material. Further details can be found in the textbooks of Stanley [SF99], Manivel [Man01], and Macdonald [Mac98].
An integer partition λ = (λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_k) is a finite nonincreasing sequence of positive integers. We define ℓ(λ) to be the length of the sequence λ (so above, ℓ(λ) = k). We define r_i(λ) to be the number of occurrences of i as a part of λ (so, for example, r_1(2, 1, 1, 1) = 3). If λ_1 + ⋯ + λ_{ℓ(λ)} = n, we say that λ is a partition of n, and we write λ ⊢ n.
The Young diagram of shape λ is a set of squares called cells, left- and top-justified (that is, in "English notation"), such that the ith row from the top contains λ_i cells. For example, the Young diagram of shape (2, 2, 1) has two cells in each of its first two rows and one cell in its third row. Let C(λ) denote the set of cells of the Young diagram of shape λ. If c ∈ C(λ) is a cell of the Young diagram of shape λ, we write c↑ for the cell immediately above c (assuming it exists), c→ for the cell immediately right of c, and so on. We write λᵀ for the transpose of λ, the integer partition whose Young diagram is obtained from that of λ by exchanging rows and columns.
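As a quick illustration of the transpose (our own sketch, not part of the paper): column j of the Young diagram of λ contains one cell for each part λ_i > j, so the transpose can be computed directly.

```python
def conjugate(lam):
    """Transpose of a partition lam = [lam_1 >= ... >= lam_k].

    The j-th part of the transpose is the height of column j of the
    Young diagram, i.e. the number of parts of lam exceeding j.
    """
    if not lam:
        return []
    return [sum(1 for part in lam if part > j) for j in range(lam[0])]

print(conjugate([2, 2, 1]))  # [3, 2]
```

For example, conjugate([2, 2, 1]) returns [3, 2], matching the column heights of the Young diagram of (2, 2, 1).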
Let S_N denote the set of all permutations of the set N fixing all but finitely many elements. A symmetric function f ∈ C⟦x_1, x_2, …⟧ is a power series of bounded degree such that for each permutation σ ∈ S_N, we have f(x_1, x_2, …) = f(x_{σ(1)}, x_{σ(2)}, …). The set Sym ⊂ C⟦x_1, x_2, …⟧ of symmetric functions forms a C-vector space.
Furthermore, if Sym_d denotes the set of symmetric functions that are homogeneous of degree d, then each Sym_d is a vector space, and

\[ \mathrm{Sym} = \bigoplus_{d=0}^{\infty} \mathrm{Sym}_d \]

as graded vector spaces. The dimension of Sym_d as a C-vector space is equal to the number of integer partitions of d, and many bases of symmetric functions are conveniently indexed by integer partitions.
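For instance, one can tabulate these dimensions directly by counting partitions; the following Python sketch (an illustration of ours, not from the paper) uses the standard largest-part recursion.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_partitions(n, max_part=None):
    """Number of partitions of n with all parts at most max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    if n < 0 or max_part == 0:
        return 0
    # Either use at least one part equal to max_part, or use none at all.
    return num_partitions(n - max_part, max_part) + num_partitions(n, max_part - 1)

# dim Sym_d for d = 0, 1, ..., 5:
print([num_partitions(d) for d in range(6)])  # [1, 1, 2, 3, 5, 7]
```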
Below we recall some commonly used bases that will appear in this paper.
Definition 2.1. The following are bases of Sym:
• the monomial symmetric functions {m_λ}, defined as

\[ m_\lambda = \sum x_{i_1}^{\lambda_1} \cdots x_{i_{\ell(\lambda)}}^{\lambda_{\ell(\lambda)}}, \]

where the sum ranges over all distinct monomials formed by choosing distinct positive integers i_1, …, i_{ℓ(λ)};
• the augmented monomial symmetric functions {m̃_λ}, defined as

\[ \widetilde{m}_\lambda = \left( \prod_{i=1}^{\infty} r_i(\lambda)! \right) m_\lambda; \]
• the elementary symmetric functions {e_λ}, defined by

\[ e_n = \sum_{i_1 < \cdots < i_n} x_{i_1} \cdots x_{i_n} \quad\text{and}\quad e_\lambda = e_{\lambda_1} \cdots e_{\lambda_{\ell(\lambda)}}. \]

[…]

if j > 1, then a_{i(j−1)} ≠ ∅ and a_{i(j−1)} <_P a_{ij}; if i > 1, then a_{(i−1)j} ≠ ∅ and a_{(i−1)j} ≱_P a_{ij}.
Under the interpretation of A as a partial matrix, this condition means that the partial filling takes the shape of a Young diagram in English orientation and that moreover the columns of A are nondecreasing (a "semistandardness" condition).
As noted in [Gas96], each proper α-coloring κ of G corresponds to a P-array A_κ by filling row i of A_κ with the elements of κ⁻¹(i) in their unique P-increasing order. Thus, for any partition µ, [m_µ]X_{(G,α)} is the number of distinct P-arrays A whose nonempty positions correspond to the Young diagram of the partition µ, and where for each v ∈ P, the number of entries equal to v is exactly α(v). We say that such P-arrays have shape µ and content α, and denote the number of such P-arrays by N_P(µ, α). Similarly, we write T_P(µ, α) for the number of P-tableaux of shape µ and content α.
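To make the counting concrete, here is a brute-force count of such fillings for a toy example of ours (not a computation from the paper): we take P to be the chain 1 <_P 2 <_P 3, the shape (2, 1), and content using each element once, imposing the row condition a_{i(j−1)} <_P a_{ij} and the column condition a_{(i−1)j} ≱_P a_{ij} described above.

```python
from itertools import permutations

def less_P(a, b):
    # Toy poset: the chain 1 <_P 2 <_P 3 on the integers.
    return a < b

def count_P_arrays(shape, content):
    """Brute-force count of fillings of the Young diagram `shape` with
    the multiset `content` such that rows strictly increase in P from
    left to right and no entry is >=_P the entry directly below it."""
    cells = [(i, j) for i, row_len in enumerate(shape) for j in range(row_len)]
    count = 0
    for perm in set(permutations(content)):
        filling = dict(zip(cells, perm))
        ok = True
        for (i, j), v in filling.items():
            # Row condition: left neighbor must be strictly below v in P.
            if j > 0 and not less_P(filling[(i, j - 1)], v):
                ok = False
            # Column condition: the entry above must not be >=_P v.
            if i > 0 and (i - 1, j) in filling:
                above = filling[(i - 1, j)]
                if above == v or less_P(v, above):
                    ok = False
        count += ok
    return count

print(count_P_arrays((2, 1), (1, 2, 3)))  # 2
```

For a chain both conditions reduce to strict increase, so the count is the number of standard Young tableaux of shape (2, 1), namely 2.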
Thus, we may rewrite Equation (3.5) as

\[
\begin{aligned}
[\overline{s}_\lambda]\,\overline{X}_G &= \sum_{\pi \in S_k} \operatorname{sgn}(\pi) \sum_{q_1, \dots, q_k} \left(\!\!\binom{0}{q_1}\!\!\right) \cdots \left(\!\!\binom{k-1}{q_k}\!\!\right) \sum_{|\alpha| = n - Q} N_P(\tau(\lambda, \pi, q_1, \dots, q_k), \alpha) \\
&= \sum_{q_1, \dots, q_k} \left(\!\!\binom{0}{q_1}\!\!\right) \cdots \left(\!\!\binom{k-1}{q_k}\!\!\right) \sum_{|\alpha| = n - Q} \sum_{\pi \in S_k} \operatorname{sgn}(\pi)\, N_P(\tau(\lambda, \pi, q_1, \dots, q_k), \alpha).
\end{aligned}
\]
As part of the proof of [Gas96, Theorem 3], Gasharov shows that for any partition λ,

\[
\sum_{\pi \in S_k} \operatorname{sgn}(\pi)\, N_P(\tau(\lambda, \pi, q_1, \dots, q_k), \alpha) = T_P(\tau(\lambda, \mathrm{id}_{S_k}, q_1, \dots, q_k), \alpha),
\]

the number of P-tableaux whose shape is the Young diagram with row lengths {λ_1 − q_1, …, λ_k − q_k} and whose content is α.
Thus,

\[
[\overline{s}_\lambda]\,\overline{X}_G = \sum_{\substack{q_1, \dots, q_k \\ Q \le n}} \left(\!\!\binom{0}{q_1}\!\!\right) \cdots \left(\!\!\binom{k-1}{q_k}\!\!\right) \sum_{|\alpha| = n - Q} T_P(\tau(\lambda, \mathrm{id}_{S_k}, q_1, \dots, q_k), \alpha),
\]

which is a nonnegative integer. Since this is true for every partition λ, the Kromatic symmetric function X̄_G is Grothendieck-positive.
□

Note that the proof of Theorem 3.6 gives an (effective, but somewhat complicated) formula for the coefficients of the symmetric Grothendieck functions s̄_λ in the expansion of the Kromatic symmetric function X̄_G for G a claw-free incomparability graph. It is highly suggestive that Theorem 3.6 (and Gasharov's Schur-analogue) should have an interpretation and proof via the topology of Grassmannians. We would be very interested in a solution to the following.
Problem 3.7. For each claw-free incomparability graph G, find a corresponding subvariety V_G of the Grassmannian such that the cohomology class of V_G is represented in Sym by X_G and the structure sheaf class of V_G is represented by X̄_G.
4. Analogues of the Stanley–Stembridge conjecture

The previous section shows that Schur-positivity of X_G when G is the incomparability graph of a (3+1)-free poset lifts to an analogue for X̄_G. It is natural to ask if it is similarly possible to lift the Stanley–Stembridge conjecture — claiming that such X_G are e-positive — to the context of the Kromatic symmetric function. However, it appears that the answer is "no." We propose two definitions for a lift of the e-basis to the K-theoretic setting. On one hand, e-basis elements in usual symmetric function theory may be defined in terms of fillings of single-column Young diagrams, so we may lift this formula.
Definition 4.1. The tableau K-elementary symmetric function ē_λ is given by ē_n = s̄_{1^n} and ē_λ = ē_{λ_1} ⋯ ē_{λ_{ℓ(λ)}}.
On the other hand, we may also define e_n = (1/n!) X_{K_n}, and lift this characterization.
Definition 4.2. The graph K-elementary symmetric function is given by ē′_n = (1/n!) X̄_{K_n} and ē′_λ = ē′_{λ_1} ⋯ ē′_{λ_{ℓ(λ)}}.
It is reasonable to hope (for extending the Stanley–Stembridge conjecture) that X̄_G is positive in one of these K-theoretic e-bases whenever G is a claw-free incomparability graph, or even just when G is a unit interval graph. However, one can compute that X̄_{P_3} is not positive in either K-theoretic e-basis {ē_λ} or {ē′_λ}, dashing any such hopes.
The terms of X̄_{P_3} that are homogeneous of degree 3 must come from tableau or graph K-elementary symmetric functions of degree 3, and have coefficients corresponding to the e-expansion of X_{P_3}. Since X_{P_3} = 3e_3 + e_{21}, one sees that the terms of X̄_{P_3} for |λ| = 3 in the ē-basis are 3ē_3 + ē_{21}, and in the ē′-basis are 3ē′_3 + ē′_{21}.
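The e-expansion X_{P_3} = 3e_3 + e_{21} is easy to verify by brute force; here is a self-contained Python check of ours (three variables suffice for a degree-3 symmetric function), representing polynomials as maps from exponent vectors to coefficients.

```python
from itertools import product, combinations
from collections import Counter

def mono(*idxs):
    """Exponent vector, in three variables, of the product of the x_i."""
    m = [0, 0, 0]
    for i in idxs:
        m[i] += 1
    return tuple(m)

# Chromatic symmetric function of the path P3 (edges 0-1 and 1-2),
# truncated to three variables: one monomial per proper coloring.
X_P3 = Counter(
    mono(a, b, c)
    for a, b, c in product(range(3), repeat=3)
    if a != b and b != c
)

def e(k):
    """Elementary symmetric polynomial e_k in three variables."""
    return Counter(mono(*combo) for combo in combinations(range(3), k))

def poly_mul(p, q):
    out = Counter()
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            out[tuple(a + b for a, b in zip(m1, m2))] += c1 * c2
    return out

def poly_add(p, q):
    out = Counter(p)
    out.update(q)
    return out

# Check X_P3 = 3*e_3 + e_2*e_1, coefficient by coefficient.
rhs = poly_add(Counter({m: 3 * c for m, c in e(3).items()}), poly_mul(e(2), e(1)))
assert X_P3 == rhs
print("X_P3 agrees with 3*e3 + e21 in three variables")
```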
However, we now encounter problems with the |λ| = 4 terms. In particular, both ē_{21} and ē′_{21} are supported on the monomial x_1²x_2², with two distinct variables each of degree 2. Yet it is easy to check that there is no proper set coloring of P_3 using the color 1 exactly twice and the color 2 exactly twice; thus, these monomials must be cancelled by ē_µ or ē′_µ terms with strictly negative coefficients.
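The emptiness claim can be confirmed by exhaustive search; a small Python check of ours (not from the paper), assigning nonempty, pairwise-disjoint-along-edges color sets to the three path vertices:

```python
from itertools import product

# Nonempty subsets of the color set {1, 2}.
subsets = [{1}, {2}, {1, 2}]

# Proper set colorings of the path P3 (vertices v1 - v2 - v3):
# adjacent vertices must receive disjoint nonempty color sets.
colorings = [
    (s1, s2, s3)
    for s1, s2, s3 in product(subsets, repeat=3)
    if not (s1 & s2) and not (s2 & s3)
]

# Colorings whose monomial is x_1^2 * x_2^2, i.e. color 1 appears at
# exactly two vertices and color 2 at exactly two vertices.
target = [
    c for c in colorings
    if sum(1 in s for s in c) == 2 and sum(2 in s for s in c) == 2
]
print(len(target))  # 0
```

The only proper set colorings of P_3 with colors drawn from {1, 2} are ({1}, {2}, {1}) and ({2}, {1}, {2}), and neither uses each color exactly twice, so the target list is empty.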
That this breakdown is so fundamental suggests that it may not be possible to reasonably generalize e-positivity to the Kromatic symmetric function, in stark contrast with the generalization of Schur-positivity given in Theorem 3.6. It also suggests that the Stanley–Stembridge conjecture is not amenable to a topological interpretation along the lines of Problem 3.7.
LOGAN CREW, OLIVER PECHENIK, AND SOPHIE SPIRKL

Acknowledgements

We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference numbers RGPIN-2020-03912, RGPIN-2021-00010, and RGPIN-2022-03093]. Cette recherche a été financée par le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG) [numéros de référence RGPIN-2020-03912, RGPIN-2021-00010, et RGPIN-2022-03093]. This project was funded in part by the Government of Ontario.
References

[AN21] Alex Abreu and Antonio Nigro, Chromatic symmetric functions from the modular law, Journal of Combinatorial Theory, Series A 180 (2021), 105407.
[AS22] Per Alexandersson and Robin Sulzgruber, A combinatorial expansion of vertical-strip LLT polynomials in the basis of elementary symmetric functions, Advances in Mathematics 400 (2022), 108256.
[AWvW21] Farid Aliniaeifard, Victor Wang, and Stephanie van Willigenburg, The chromatic symmetric function of a graph centred at a vertex, preprint (2021), arXiv:2108.04850.
[Bir12] George D. Birkhoff, A determinant formula for the number of ways of coloring a map, Annals of Mathematics 14 (1912), no. 1/4, 42–46.
[Buc02] Anders Skovsted Buch, A Littlewood–Richardson rule for the K-theory of Grassmannians, Acta Mathematica 189 (2002), no. 1, 37–78.
[CH22] Soojin Cho and Jaehyun Hong, Positivity of chromatic symmetric functions associated with Hessenberg functions of bounce number 3, Electronic Journal of Combinatorics (2022), P2.19.
[CMP23] Laura Colmenarejo, Alejandro H. Morales, and Greta Panova, Chromatic symmetric functions of Dyck paths and q-rook theory, European Journal of Combinatorics 107 (2023), Paper No. 103595, 36 pages.
[CS20] Logan Crew and Sophie Spirkl, A deletion–contraction relation for the chromatic symmetric function, European Journal of Combinatorics 89 (2020), 103143.
[Dah19] Samantha Dahlberg, A new formula for Stanley's chromatic symmetric function for unit interval graphs and e-positivity for triangular ladder graphs, Séminaire Lotharingien de Combinatoire 82 (2019).
[DFvW20] Samantha Dahlberg, Angèle Foley, and Stephanie van Willigenburg, Resolving Stanley's e-positivity of claw-contractible-free graphs, Journal of the European Mathematical Society (JEMS) 22 (2020), no. 8, 2673–2696.
[Die17] Reinhard Diestel, Graph theory, fifth ed., Graduate Texts in Mathematics, vol. 173, Springer, Berlin, 2017.
[DvW18] Samantha Dahlberg and Stephanie van Willigenburg, Lollipop and lariat symmetric functions, SIAM Journal on Discrete Mathematics 32 (2018), no. 2, 1029–1039.
[DvW20] Samantha Dahlberg and Stephanie van Willigenburg, Chromatic symmetric functions in noncommuting variables revisited, Advances in Applied Mathematics 112 (2020), 101942.
[Gas96] Vesselin Gasharov, Incomparability graphs of (3 + 1)-free posets are s-positive, Discrete Mathematics 157 (1996), no. 1–3, 193–197.
[GS01] David D. Gebhard and Bruce E. Sagan, A chromatic symmetric function in noncommuting variables, Journal of Algebraic Combinatorics 13 (2001), no. 3, 227–255.
[Gua13] Mathieu Guay-Paquet, A modular relation for the chromatic symmetric functions of (3 + 1)-free posets, preprint (2013), arXiv:1306.2400.
[HHT19] Angèle M. Hamel, Chính T. Hoàng, and Jake E. Tuero, Chromatic symmetric functions and H-free graphs, Graphs and Combinatorics 35 (2019), no. 4, 815–825.
[HW20] James Haglund and Andrew Timothy Wilson, Macdonald polynomials and chromatic quasisymmetric functions, Electronic Journal of Combinatorics 27 (2020), no. 3, Paper No. 3.37, 21 pages.
[Hwa22] Byung-Hak Hwang, Chromatic quasisymmetric functions and noncommutative P-symmetric functions, preprint (2022), arXiv:2208.09857.
[Iwa20] Shinsuke Iwao, Grothendieck polynomials and the boson-fermion correspondence, Algebraic Combinatorics 3 (2020), no. 5, 1023–1040.
[LN14] Alain Lascoux and Hiroshi Naruse, Finite sum Cauchy identity for dual Grothendieck polynomials, Japan Academy. Proceedings. Series A. Mathematical Sciences 90 (2014), no. 7, 87–91.
[LP07] Thomas Lam and Pavlo Pylyavskyy, Combinatorial Hopf algebras and K-homology of Grassmannians, International Mathematics Research Notices.
+page_content=' IMRN (2007), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 24, Art.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' ID rnm125, 48 pages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [Mac98] Ian G Macdonald, Symmetric functions and Hall polynomials, Oxford University Press, 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [Man01] Laurent Manivel, Symmetric functions, Schubert polynomials and degeneracy loci, SMF/AMS Texts and Mono- graphs, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 6, American Mathematical Society, Providence, RI and Soci´et´e Math´ematique de France, Paris, 2001, Translated from the 1998 French original by John R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Swallow, Cours Sp´ecialis´es, 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [MPS21] Cara Monical, Oliver Pechenik, and Dominic Searles, Polynomials from combinatorial K-theory, Canadian Journal of Mathematics 73 (2021), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 1, 29–62.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [NS17] Gleb Nenashev and Boris Shapiro, “K-theoretic” analog of Postnikov-Shapiro algebra distinguishes graphs, Journal of Combinatorial Theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Series A 148 (2017), 316–332.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [PS04] Alexander Postnikov and Boris Shapiro, Trees, parking functions, syzygies, and deformations of monomial ideals, Transactions of the American Mathematical Society 356 (2004), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 8, 3109–3142.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' THE KROMATIC SYMMETRIC FUNCTION: A K-THEORETIC ANALOGUE OF XG 13 [PY17] Oliver Pechenik and Alexander Yong, Genomic tableaux, Journal of Algebraic Combinatorics 45 (2017), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 3, 649–685.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [SF99] Richard P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Stanley and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Fomin, Enumerative combinatorics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 2, volume 62 of, Cambridge Studies in Advanced Mathematics (1999).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [SS93] Richard P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Stanley and John R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Stembridge, On immanants of Jacobi-Trudi matrices and permutations with restricted position, Journal of Combinatorial Theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Series A 62 (1993), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 2, 261–279.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [Sta95] Richard P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Stanley, A symmetric function generalization of the chromatic polynomial of a graph, Advances in Mathematics 111 (1995), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 1, 166–194.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [Sta98] , Graph colorings and related symmetric functions: ideas and applications a description of results, interest- ing applications, & notable open problems, Discrete Mathematics 193 (1998), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 1-3, 267–286.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [SW16] John Shareshian and Michelle L Wachs, Chromatic quasisymmetric functions, Advances in Mathematics 295 (2016), 497–551.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [Tom21] Foster Tom, Private communication to L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Crew and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Spirkl, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [TWZ22] Vasu Tewari, Andrew Timothy Wilson, and Philip B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Zhang, Chromatic nonsymmetric polynomials of Dyck graphs are slide-positive, Proceedings of the American Mathematical Society 150 (2022), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 5, 1873–1888.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [TY09] Hugh Thomas and Alexander Yong, A jeu de taquin theory for increasing tableaux, with applications to K-theoretic Schubert calculus, Algebra & Number Theory 3 (2009), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' 2, 121–148.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' [Wes21] Douglas B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' West, Combinatorial mathematics, Cambridge University Press, Cambridge, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Department of Combinatorics & Optimization, University of Waterloo, Waterloo, ON, N2L 3G1, Canada.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content=' Email address: {lcrew, opecheni, sspirkl}@uwaterloo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
+page_content='ca' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'}
diff --git a/adFAT4oBgHgl3EQf4h70/content/tmp_files/2301.08727v1.pdf.txt b/adFAT4oBgHgl3EQf4h70/content/tmp_files/2301.08727v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ffb5baf08d6c2bbdaa2abd11cc41127b002f41b6
--- /dev/null
+++ b/adFAT4oBgHgl3EQf4h70/content/tmp_files/2301.08727v1.pdf.txt
@@ -0,0 +1,3248 @@
+Neural Architecture Search: Insights from 1000 Papers
+Colin White
+colin@abacus.ai
+Abacus.AI
+San Francisco, CA 94105, USA
+Mahmoud Safari
+safarim@cs.uni-freiburg.de
+University of Freiburg
+Freiburg im Breisgau, 79110, Germany
+Rhea Sukthanker
+sukthank@cs.uni-freiburg.de
+University of Freiburg
+Freiburg im Breisgau, 79110, Germany
+Binxin Ru
+robinru@sailyond.com
+Sailyond Technology & Research Institute of Tsinghua University
+Shenzhen, 518071, China
+Thomas Elsken
+thomas.elsken@de.bosch.com
+Bosch Center for Artificial Intelligence
+Renningen, 71272, Germany
+Arber Zela
+zelaa@cs.uni-freiburg.de
+University of Freiburg
+Freiburg im Breisgau, 79110, Germany
+Debadeepta Dey
+dedey@microsoft.com
+Microsoft Research
+Redmond, WA 98052, USA
+Frank Hutter
+fh@cs.uni-freiburg.de
+University of Freiburg & Bosch Center for Artificial Intelligence
+Freiburg im Breisgau, 79110, Germany
+Abstract
+In the past decade, advances in deep learning have resulted in breakthroughs in a variety
+of areas, including computer vision, natural language understanding, speech recognition,
+and reinforcement learning. Specialized, high-performing neural architectures are crucial to
+the success of deep learning in these areas. Neural architecture search (NAS), the process
+of automating the design of neural architectures for a given task, is an inevitable next
+step in automating machine learning and has already outpaced the best human-designed
+architectures on many tasks. In the past few years, research in NAS has been progressing
+rapidly, with over 1000 papers released since 2020. In this survey, we provide an organized
+and comprehensive guide to neural architecture search.
+We give a taxonomy of search
+spaces, algorithms, and speedup techniques, and we discuss resources such as benchmarks,
+best practices, other surveys, and open-source libraries.
+Keywords:
+neural architecture search, automated machine learning, deep learning
+©2022 Colin White, Mahmoud Safari, Rhea Sukthanker, Binxin Ru, Thomas Elsken, Arber Zela, Debadeepta Dey
+and Frank Hutter.
+License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/.
+arXiv:2301.08727v1 [cs.LG] 20 Jan 2023
+
+White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter
+1. Introduction
+In the past decade, deep learning has become the dominant paradigm in machine learning for
+a variety of applications and has been used in a number of breakthroughs across computer
+vision (He et al., 2016a; Huang et al., 2017; Krizhevsky et al., 2012; Szegedy et al., 2017),
+natural language understanding (Bahdanau et al., 2015; Hochreiter and Schmidhuber, 1997;
+Vaswani et al., 2017), speech recognition (Chan et al., 2016; Chorowski et al., 2015; Hannun
+et al., 2014), and reinforcement learning (Mnih et al., 2015; Silver et al., 2016); it is also
+becoming a very powerful approach for the analysis of tabular data (Hollmann et al., 2022;
+Kadra et al., 2021; Somepalli et al., 2021). While many factors played into the rise of deep
+learning approaches, including deep learning’s ability to automate feature extraction, as
+well as an increase in data and the larger availability of computational resources, the design
+of high-performing neural architectures has been crucial to the success of deep learning.
+Recently, just as manual feature engineering was replaced by automated feature learning
+via deep learning, it is getting more and more common to automate the time-consuming
+architecture design step via neural architecture search. Neural architecture search (NAS),
+the process of automating the design of neural architectures for a given task, has already
+outpaced the best human-designed architectures on many tasks (Chen et al., 2018; Du et al.,
+2020; Ghiasi et al., 2019; So et al., 2019; Zoph et al., 2018), notably ImageNet (Hu et al.,
+2019; Liu et al., 2018a; Real et al., 2019; Zoph et al., 2018), as well as diverse and less-studied
+datasets (Shen et al., 2022), and in memory- or latency-constrained settings (Benmeziane
+et al., 2021). Indeed, in the past few years, research in NAS has been progressing rapidly.
+Although several surveys have been written for NAS and related areas in the past (Elsken
+et al., 2019b; Wistuba et al., 2019, also see Section 10.2), over 1000 new NAS papers
+have been released in the last two years, warranting the need for an up-to-date survey
+on over-arching advances, which we aim to provide with this work.
+1.1 A Brief History of NAS and Relation to Other Fields
+NAS emerged as a subfield of automated machine learning (AutoML) (Hutter et al., 2019),
+the process of automating all steps in the machine learning pipeline, from data cleaning,
+to feature engineering and selection, to hyperparameter and architecture search. NAS has
+a large overlap with hyperparameter optimization (HPO) (Feurer and Hutter, 2019), which
+refers to the automated optimization of hyperparameters of the machine learning model.
+NAS is sometimes referred to as a subset of HPO (Li and Talwalkar, 2019), since NAS can
+be expressed as optimizing only the hyperparameters that correspond to the architecture,
+a subset of the entire set of model hyperparameters. However, the techniques for HPO vs.
+NAS are often substantially different.
+A typical HPO problem optimizes a mix of continuous and categorical hyperparameters,
+such as learning rate, dropout rate, batch size, momentum, activation function, normaliza-
+tion strategy, and so on. Typically, the domains of most hyperparameters are independent
+(that is, the set of possible values for each hyperparameter is not affected by the possible
+values of other hyperparameters). Therefore, the typical search space of an HPO problem
+is the product space of a mix of continuous and categorical dimensions. By contrast, NAS
+is specifically focused on optimizing the topology of the architecture, which can be much
+more complex. The topology is typically represented by a directed acyclic graph (DAG), in
+which the nodes or edges are labeled by neural network operations. Therefore, the search
+space of a NAS problem is typically discrete1 and can be represented directly as a graph,
+or as a hierarchical structure of conditional hyperparameters.
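+To make this contrast concrete, the following toy sketch (our own illustration, with hypothetical operation names, not taken from any particular NAS paper) represents an architecture as a DAG: a list of operation labels for the internal nodes plus a set of forward edges, which guarantees acyclicity by construction.

```python
import itertools

# Hypothetical toy search space: a DAG over 4 nodes. Node 0 is the input and
# node 3 the output; nodes 1 and 2 each carry a searchable operation. Edges
# always point from a lower-indexed node to a higher-indexed one, so every
# candidate is acyclic by construction.
OPS = ["conv3x3", "conv1x1", "maxpool"]

def enumerate_architectures():
    """Yield every (ops, edges) pair in this toy search space."""
    possible_edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
    for ops in itertools.product(OPS, repeat=2):  # ops for nodes 1 and 2
        # any nonempty subset of the 6 forward edges is a valid topology here
        for k in range(1, len(possible_edges) + 1):
            for edges in itertools.combinations(possible_edges, k):
                yield ops, edges

total = sum(1 for _ in enumerate_architectures())
print(total)  # 3^2 operation choices x (2^6 - 1) edge subsets = 567
```

+Even this 4-node toy yields 567 architectures; realistic DAG spaces with more nodes and operations grow far faster, which is why the graph structure matters to the optimizer.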
+Although standard HPO algorithms can sometimes be adapted for NAS (Izquierdo et al.,
+2021; Klein et al., 2020; Li et al., 2020c; Mendoza et al., 2016; Zela et al., 2018; Zimmer
+et al., 2021), it is often much more efficient and effective to use NAS techniques which are
+tailored to optimize the intricate space of neural architectures. Furthermore, most modern
+NAS techniques go beyond black-box optimization algorithms by exploiting details specific
+to NAS, such as sharing weights among similar neural architectures to avoid training each
+of them from scratch.
+Figure 1: Number of NAS papers by year.
+Historically, NAS has been around since
+at least the late 1980s (Angeline et al., 1994;
+Kitano, 1990; Miller et al., 1989; Teno-
+rio and Lee, 1988) but it did not gain
+widespread attention until the popular pa-
+per, NAS with Reinforcement Learning, by
+Zoph and Le (2017). There has since been a
+huge interest in NAS, with over 1000 papers
+released in the last two years (see Figure 1).
+By now,
+many different approaches,
+such as reinforcement learning, evolution-
+ary algorithms, Bayesian optimization, and
+NAS-specific techniques based on weight
+sharing have been explored.
+Perhaps the
+most popular recent approaches are one-
+shot techniques (Bender et al., 2018; Liu et al., 2019c), which often substantially speed
+up the search process compared to black-box optimization techniques. In recent years, a
+large body of follow-up work has focused on making one-shot methods more robust and reli-
+able (Wang et al., 2021; Zela et al., 2020a). In parallel, there has been a large push to make
+NAS research more reproducible and scientific, starting with the release of NAS-Bench-101
+(Ying et al., 2019), the first tabular benchmark for NAS. Furthermore, while the early days
+of NAS have mostly focused on image classification problems such as CIFAR-10 and
+ImageNet, the field has now expanded to many other domains, such as object detection (Ghiasi
+et al., 2019; Xu et al., 2019a), semantic segmentation (Chen et al., 2018; Liu et al., 2019a),
+speech recognition (Mehrotra et al., 2021), partial differential equation solving (Roberts
+et al., 2021; Shen et al., 2022; Tu et al., 2022a), protein folding (Roberts et al., 2021; Shen
+et al., 2022), and weather prediction (Tu et al., 2022b), and the field has seen a renewed
+interest in natural language processing (Chitty-Venkata et al., 2022; Javaheripi et al., 2022).
+1.2 Background and Definitions
+Prior NAS surveys (e.g. Elsken et al., 2019b; Wistuba et al., 2019) have referred to three
+dimensions of NAS: search space, search strategy, and performance evaluation strategy (see
+1. Notably, some NAS techniques such as DARTS (Liu et al., 2019c) relax the domain to be continuous
+during the search, but then the hyperparameters are discretized in order to return the final architecture.
+Figure 2: Overview of neural architecture search (Elsken et al., 2019b; Weng, 2020). A
+search strategy iteratively selects architectures (typically by using an architecture
+encoding method) from a predefined search space A. The architectures are passed
+to a performance estimation strategy, which returns the performance estimate to
+the search strategy. For one-shot methods, the search strategy and performance
+estimation strategy are inherently coupled.
+Figure 2). We define each term below, as this is a useful disambiguation for understanding
+many NAS methods. However, it is worth noting that the trichotomy cannot be applied to
+the large sub-area of one-shot methods, because for these methods, the search strategy is
+coupled with the performance evaluation strategy (Xie et al., 2021).
+A search space is the set of all architectures that the NAS algorithm is allowed to select.
+Common NAS search spaces range in size from a few thousand to over 10^20. While the
+search space in principle can be extremely general, incorporating domain knowledge when
+designing the search space can simplify the search.
+However, adding too much domain
+knowledge introduces human bias, which reduces the chances of a NAS method finding
+truly novel architectures. Search spaces are discussed in more detail in Section 2.
+A search strategy is an optimization technique used to find a high-performing archi-
+tecture in the search space. There are generally two main categories of search strategies:
+black-box optimization based techniques (including multi-fidelity techniques) and one-shot
+techniques. However, there are some NAS methods for which both or neither category ap-
+plies. Black-box optimization based techniques, such as reinforcement learning, Bayesian
+optimization, and evolutionary search, are surveyed in Section 3. One-shot methods, in-
+cluding supernet- and hypernet-based methods, are surveyed in Section 4.
+A performance estimation strategy is any method used to quickly predict the perfor-
+mance of neural architectures in order to avoid fully training the architecture. For example,
+while we can run a discrete search strategy by fully training and evaluating architectures
+chosen throughout the search, using a performance estimation strategy such as learning
+curve extrapolation can greatly increase the speed of the search. Performance estimation
+strategies, and more generally speedup techniques, are surveyed in Section 5.
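+As a hedged illustration of one such strategy, the sketch below fits a simple saturating model acc(e) ≈ a − b/e to the first few epochs of a synthetic learning curve and extrapolates it, so that architectures could be ranked without full training. Both the model form and the data are our own assumptions for illustration, not a method from the survey.

```python
# Toy learning-curve extrapolation: fit acc = a - b / epoch by least squares
# (the model is linear in the basis [1, 1/epoch]), then predict a later epoch.

def fit_saturating_curve(epochs, accs):
    """Least-squares fit of acc = a - b / epoch; returns (a, b)."""
    n = len(epochs)
    xs = [1.0 / e for e in epochs]
    sx, sy = sum(xs), sum(accs)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, accs))
    # Solve the 2x2 normal equations for intercept a and slope (= -b).
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - slope * sx) / n
    return a, -slope  # a: predicted asymptotic accuracy, b: decay weight

def extrapolate(epochs, accs, target_epoch):
    a, b = fit_saturating_curve(epochs, accs)
    return a - b / target_epoch

# Synthetic partial curve generated exactly from acc = 0.9 - 0.4 / epoch:
partial = [(e, 0.9 - 0.4 / e) for e in range(1, 6)]
pred = extrapolate([e for e, _ in partial], [v for _, v in partial],
                   target_epoch=100)
print(round(pred, 3))  # recovers 0.9 - 0.4/100 = 0.896
```

+Because the synthetic data come exactly from the assumed model, the fit is exact; on real learning curves the model class and fitting procedure are themselves design choices.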
+The most basic definition of NAS is as follows. Given a search space A , a dataset D, a
+training pipeline P, and a time or computation budget t, the goal is to find an architecture
+a ∈ A within budget t which has the highest possible validation accuracy when trained
+using dataset D and training pipeline P. A common method of approaching NAS is to
+approximately solve the following expression within time t:
+    min_{a∈A} Lval(w*(a), a)    s.t.    w*(a) = argmin_w Ltrain(w, a).
+Here, Lval and Ltrain denote the validation loss and training loss, respectively. While this is
+the core definition of NAS, other variants will be discussed throughout this survey. For ex-
+ample, we may want to return an architecture with constraints on the number of parameters
+(Section 6.2), or we may use meta-learning (Section 5.3) to improve performance.
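+The definition above can be sketched as a black-box search loop. The snippet below is a minimal illustration with a toy search space and a synthetic stand-in for "train on D with pipeline P, then measure validation accuracy"; the space, scoring function, and budget are our own assumptions, not a recipe from the survey.

```python
import random

# NAS as black-box optimization: random search over a toy chain-structured
# space, with a synthetic objective replacing actual training.
SPACE = {"depth": [2, 4, 8], "width": [16, 32, 64], "kernel": [1, 3, 5]}

def sample_architecture(rng):
    return {k: rng.choice(v) for k, v in SPACE.items()}

def validation_accuracy(arch):
    # Synthetic stand-in for Lval(w*(a), a): deeper/wider is better here,
    # with a mild penalty for large kernels. A real run would train weights.
    return 0.5 + 0.02 * arch["depth"] + 0.003 * arch["width"] - 0.01 * arch["kernel"]

def random_search(budget, seed=0):
    """Return the best (architecture, accuracy) found within the budget."""
    rng = random.Random(seed)
    best_arch, best_acc = None, float("-inf")
    for _ in range(budget):  # the budget t, measured here in evaluations
        arch = sample_architecture(rng)
        acc = validation_accuracy(arch)
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc

arch, acc = random_search(budget=50)
print(arch, acc)
```

+The more sophisticated search strategies surveyed later replace the uniform sampler with reinforcement learning, evolution, or Bayesian optimization, and replace full training with cheaper performance estimates.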
+Throughout the rest of this article, we provide a comprehensive guide to the latest NAS
+techniques and resources. Sections 2 to 5 are devoted to NAS techniques, surveying search
+spaces, black-box optimization techniques, one-shot techniques, and speedup techniques,
+respectively. Sections 6 to 10 cover extensions, applications, and resources, and Section 11
+concludes by discussing promising future directions.
+2. Search Spaces
+The search space is perhaps the most essential ingredient of NAS. While other areas of
+AutoML overlap with NAS in terms of the optimization methods used, the architectural
+search space is unique to NAS. Furthermore, the search space is often the first step when
+setting up NAS. The majority of popular search spaces are task-specific and were heavily
+inspired by the state-of-the-art manual architectures in their respective application domains.
+For example, NAS-Bench-101, a popular image classification search space (Ying et al., 2019)
+was inspired by ResNet (He et al., 2016a) and Inception (Szegedy et al., 2017).
+In fact, the design of the search space represents an important trade-off between human
+bias and efficiency of search: if the size of the search space is small and includes many hand-
+picked decisions, then NAS algorithms will have an easier time finding a high-performing
+architecture. On the other hand, if the search space is large with more primitive building
+blocks, a NAS algorithm will need to run longer, but there is the possibility of discovering
+truly novel architectures (Real et al., 2020).
+In this section, we survey the main categories of search spaces for NAS as summarized
+in Table 1. We start in Section 2.1 by defining general terminology. In Sections 2.2 and 2.3,
+we discuss the relatively simple macro and chain-structured search spaces, respectively. In
+Section 2.4, we describe the most popular type of search space: the cell-based search space.
+In Section 2.5, we describe hierarchical search spaces. Finally, in Section 2.6, we discuss
+architecture encodings, an important design decision for NAS algorithms that is inherently
+tied to the choice of search space.
+2.1 Terminology
+The search space terminologies differ across the literature, depending on the type of search
+space. For clarity, we define the main terms here and in Appendix Figure 9.
+• Operation/primitive denotes the atomic unit of the search space. For nearly all popular
+search spaces, this is a triplet of a fixed activation, operation, and fixed normalization,
+such as ReLU-conv 1x1-batchnorm, where the ReLU and BatchNorm are fixed, and the
+middle operation is a choice among several different operations.
+Search space                  | Structure        | Searchable hyperparameters                               | Levels of topology
+Macro search space            | DAG              | Operation types, DAG topology, macro hyperparameters     | 1
+  e.g. NASBOT (Kandasamy et al., 2018), EfficientNet (Tan and Le, 2019)
+Chain-structured search space | Chain            | Operation types, macro hyperparameters                   | 1
+  e.g. MobileNetV2 (Sandler et al., 2018)
+Cell-based search space       | Duplicated cells | Operation type, cell topology                            | 1
+  e.g. DARTS (Liu et al., 2019c)
+Hierarchical search space     | Varied           | Operation type, cell/DAG topology, macro hyperparameters | > 1
+  e.g. Hier. Repr. (Liu et al., 2018b), Auto-DeepLab (Liu et al., 2019b)
+Table 1: Summary of the types of NAS search spaces.
+• Layer is often used in chain-structured or macro search spaces to denote the same thing
+as an operation or primitive. However, it sometimes refers to well-known combinations
+of operations, such as the inverted bottleneck residual (Cai et al., 2019; Sandler
+et al., 2018; Tan and Le, 2019; Tan et al., 2019).
+• Block/Module is sometimes used to denote a sequential stack of layers following the
+notation used in most chain-structured and macro search spaces (Cai et al., 2020; Tan
+and Le, 2019; Tan et al., 2019).
+• Cell is used to denote a directed acyclic graph of operations in cell-based search spaces.
+The maximum number of operations in a cell is often fixed.
+• Motif is used to denote a sub-pattern formed from multiple operations in an architecture.
+Some literature refers to a cell as a higher-level motif and a smaller set of operations as
+a base-level motif.
+2.2 Macro Search Spaces
+In the NAS literature, macro search spaces may refer to one of two types. First, they may
+refer to search spaces which encode the entire architecture in one level (as opposed to cell-
+based or hierarchical search spaces), which were popular in 2017 and 2018. Second, they
+may refer to search spaces which focus only on macro-level hyperparameters.
+For the former, an entire architecture is represented as a single directed acyclic graph
+(Baker et al., 2017; Kandasamy et al., 2018; Real et al., 2017; Zoph and Le, 2017). These
+search spaces typically have a choice of operation at each node in the graph, as well as the
+choice of DAG topology. For example, the NASBOT CNN search space (Kandasamy et al.,
+2018) consists of choices of different convolution, pooling, and fully connected layers, with
+any DAG topology, with depth of at most 25.
+The second type of macro search spaces (Dong et al., 2021b; Duan et al., 2021; Tan and
+Le, 2019), focus on the variation of macro-level hyperparameters, such as where and how
+much to downsample the spatial resolution throughout the architecture, while keeping the
+architecture topology and operations fixed.2 For example, Tan and Le (2019) propose a
+CNN search space by varying the network depth, width, and input feature resolution.
+Compared to other search spaces, macro search spaces have high representation power:
+their flexible structure allows the possibility of discovering novel architectures. However,
+their main downside is that they are very slow to search. In the next two sections, we
+discuss types of search spaces which have more rigidity, making them faster to search.
+2.3 Chain-Structured Search Spaces
+Chain-structured search spaces, as the name suggests, have a simple architecture topology:
+a sequential chain of operation layers. They often take state-of-the-art manual designs, such
+as ResNet (He et al., 2016b) or MobileNets (Howard et al., 2017), as the backbone.
+There are several chain-structured search spaces based on convolutional networks. Prox-
+ylessNAS (Cai et al., 2019) starts with the MobileNetV2 (Sandler et al., 2018) architecture
+and searches over the kernel sizes and expansion ratios in the inverted bottleneck residual
+layers. XD (Roberts et al., 2021) and DASH (Shen et al., 2022) start with a LeNet (LeCun
+et al., 1999), ResNet (He et al., 2016a), or WideResNet (Zagoruyko and Komodakis, 2016),
+and search over an expressive generalization of convolutions based on Kaleidoscope matrices
+(Dao et al., 2020), or kernel sizes and dilations, respectively.
+Chain-structured search spaces are also popular in transformer-based search spaces.
+For example, the search space from Lightweight Transformer Search (LTS) (Javaheripi
+et al., 2022) consists of a chain-structured configuration of the popular GPT family of
+architectures (Brown et al., 2020; Radford et al., 2019) for autoregressive language modeling,
+with searchable choices for the number of layers, model dimension, adaptive embedding
+dimension, dimension of the feedforward neural network in a transformer layer, and number
+of heads in each transformer layer. The search spaces from NAS-BERT (Xu et al., 2021a)
+and MAGIC (Xu et al., 2022) both consist of a chain-structured search space over the BERT
+architecture (Devlin et al., 2019) with up to 26 operation choices consisting of variants of
+multi-head attention, feedforward layers, and convolutions with different kernel sizes.
+Chain-structured search spaces are conceptually simple, making them easy to design
+and implement. They also often contain strong architectures that can be found relatively
+quickly. Their main downside is that, due to the simple architecture topology, there is a
+comparatively lower chance of discovering a truly novel architecture.
+2.4 Cell-based Search Spaces
+The cell-based search space is perhaps the most popular type of search space in NAS. It is
+inspired by the fact that state-of-the-art human-designed CNNs often consist of repeated
+patterns, for example, residual blocks in ResNets (Zoph et al., 2018).
+Thus, instead of
+searching for the entire network architecture from scratch, Zoph et al. (2018) proposed to
+only search over relatively small cells, and stack the cells several times in sequence to form
+the overall architecture. Formally, the searchable cells make up the micro structure of the
+search space, while the outer skeleton (the macro structure) is fixed.
+2. Strictly speaking, since these search spaces have a fixed architecture topology, they may also be called
+hyperparameter tuning search spaces instead of NAS search spaces.
+
+White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter
+Figure 3: Illustration of cell-based search spaces. The outer skeleton across cells (left) is
+fixed, while the cells are searchable. NASNet assigns operations to nodes (middle)
+while DARTS assigns operations to edges (right).
+The first modern cell-based search space, NASNet, was proposed by Zoph et al. (2018).
+It comprises two types of cells: the normal cell and the reduction cell. Both types have
+the same structure, but the initial operations in the reduction cell have a stride of two to
+halve the input spatial resolution. Each NASNet cell can be represented as a DAG with
+seventeen non-input nodes (see Figure 3 (middle)). The nodes are arranged in triples of
+two operation nodes (such as convolution and pooling operations) and a combination node
+(such as addition or concatenation). The final NASNet architecture is formed by stacking
+multiple normal and reduction cells in sequence (see Figure 3 (left)). Overall, there are
+approximately 10^35 unique architectures in the NASNet search space.
+Since the NASNet search space, many other cell search spaces have been proposed, all
+of which share a high-level similarity to NASNet, with the main differences being the fixed
+macro structure, the layout and constraints in the cells, and the choices of operations within
+the cells. Two of the most popular cell-based search spaces are NAS-Bench-101 (Ying et al.,
+2019) and the DARTS search space (Liu et al., 2019c). NAS-Bench-101 is the first tabular
+benchmark for NAS (discussed in Section 8), and its cells consist of seven nodes, each with
+three choices of operations; it contains 423,624 unique architectures. The DARTS search
+space differs more fundamentally: while it also has two searchable cells, the DARTS cells
+have operation choices on the edges of the graph rather than on the nodes. In the DARTS
+cell, the nodes represent latent representations and the edges are operations, whereas in the
+NASNet cell, the latent representations are on the edges and the nodes are operations. The
+DARTS cells (see Figure 3 (right)) contain eight edges, each of which has eight choices of
+operations. Overall, the DARTS space contains a total of 10^18 unique architectures.
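The structural difference between the two conventions can be sketched in a few lines. In the toy code below, the operation names follow the commonly cited DARTS primitives, but the edge-generation rule is a deliberate simplification: it connects every intermediate node to all earlier nodes, whereas a real DARTS cell keeps only two incoming edges per intermediate node (hence the eight searchable edges mentioned above):

```python
# Operation choices on edges, as in a DARTS-style cell (eight candidates).
OPS = ["none", "skip_connect", "sep_conv_3x3", "sep_conv_5x5",
       "dil_conv_3x3", "dil_conv_5x5", "max_pool_3x3", "avg_pool_3x3"]

def cell_edges(num_intermediate, num_inputs=2):
    """All (source, target) edges when each intermediate node sees every earlier node.
    Input nodes are labeled -2 and -1; intermediate nodes 0, 1, ...
    NOTE: a simplification -- DARTS restricts each node to two incoming edges."""
    edges = []
    for node in range(num_intermediate):
        for src in range(-num_inputs, node):
            edges.append((src, node))
    return edges

def num_op_assignments(num_edges, num_ops):
    """Operation assignments on a fixed topology: num_ops choices per edge."""
    return num_ops ** num_edges

print(len(cell_edges(2)))                    # 5 edges for two intermediate nodes
print(num_op_assignments(8, len(OPS)))       # 16777216 = 8^8, per searchable cell
```

With the fixed eight-edge topology of a DARTS cell, 8^8 op assignments per cell (and two searchable cells) is where the large overall architecture counts come from.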
+
+Neural Architecture Search: Insights from 1000 Papers
+Besides image classification, similar cell designs have also been adopted for language
+models. For example, NAS-Bench-ASR (Mehrotra et al., 2021) provides a search space of
+convolutional speech model cells for automatic speech recognition, and there are several
+LSTM-based search spaces (Klyuchnikov et al., 2022; Liu et al., 2019c; Pham et al., 2018).
+The cell-based design significantly reduces the complexity of search spaces, while often
+resulting in a high-performing final architecture. This has led to the cell-based search spaces
+being the most popular type of search space in recent years. Furthermore, by detaching the
+depth of an architecture from the search, the cell-based structure is transferable: the optimal
+cells learned on a small dataset (e.g., CIFAR-10) typically transfer well to a large dataset
+(e.g., ImageNet) by increasing the number of cells and filters in the overall architecture (Liu
+et al., 2019c; Zoph et al., 2018).
+Despite their popularity, cell-based search spaces face some criticisms. First, while the
+DARTS search space contains a seemingly large number of 10^18 architectures, the variance
+in the performance of DARTS architectures is rather small (Wan et al., 2022b; Yang et al.,
+2020). This small variance may contribute to the fact that sophisticated search strategies
+can only give marginal gains over the average performance of randomly sampled archi-
+tectures (Yang et al., 2020). Moreover, there are many ad-hoc design choices and fixed
+hyperparameters that come with cell-based search spaces whose impact is unclear (Wan
+et al., 2022b), such as the separation of normal and reduction cells, number of nodes, and
+set of operations. Finally, although limiting the search to a cell significantly reduces the
+search complexity, this practice reduces the expressiveness of the NAS search space, making
+it difficult to find highly novel architectures with cell search spaces. In light of this, some
+recent work advocates for searching for macro connections among cells in addition to the
+micro cell structure. We discuss this in more detail in the next section.
+2.5 Hierarchical Search Spaces
+Up to this point, all search spaces described have had a flat representation, in which an
+architecture is built by defining its hyperparameters, topology, and operation primitives in
+a single design level. Specifically, only one level of topology is searched, whether at the cell
+level or architecture level. On the other hand, hierarchical search spaces involve designing
+motifs at different levels, where each higher-level motif is often represented as a DAG of
+lower-level motifs (Chrostoforidis et al., 2021; Liu et al., 2018b; Ru et al., 2020b).
+A simple class of hierarchical search spaces has two searchable levels by adding macro-
+level architecture hyperparameters to cell or chain-structured search spaces. For example,
+the MnasNet search space (Tan et al., 2019) uses MobileNetV2 as the backbone. Liu et al.
+(2019b) designed a two-level search space for semantic image segmentation, and follow-up
+work extended it to image denoising (Zhang et al., 2020a) and stereo matching (Kumari and
+Kaur, 2016). Finally, Chen et al. (2021a) propose a two-level transformer-based search space
+for vision tasks inspired by ViT (Dosovitskiy et al., 2021) and DeiT (Touvron et al., 2021).
+The search space consists of a number of sequential blocks which can be a combination of
+local (convolution) or global (self-attention) layers.
+Beyond two levels, Liu et al. (2018b) and Wu et al. (2021) propose hierarchies of three
+levels. Liu et al. (2018b) propose a three-level hierarchy, where each level is a graph made
+up of components from the previous level (see Figure 4). Wu et al. (2021) propose a different
+three-level hierarchy, consisting of kernel hyperparameters, cell-based hyperparameters, and
+macro hyperparameters. The former design is extended beyond three levels in two follow-up
+works: Ru et al. (2020b) proposed a hierarchical design of four levels, controlled by a set
+of hyperparameters corresponding to a random graph generator, and Chrostoforidis et al.
+(2021) introduced a recursive building process to permit a varying number of hierarchical
+levels as well as a flexible topology among top-level motifs.
+Figure 4: Illustration of hierarchical representation proposed in Liu et al. (2018b). Level 1
+of the hierarchy consists of choices of operation primitives. Level 2 consists of
+selecting the topology across small sets of operation primitives. Level 3 consists
+of selecting the topology across the constructions from level 2.
+There are multiple benefits to using hierarchical search spaces. First, hierarchical search
+spaces tend to be more expressive. Most chain-structured, cell-based, and macro search
+spaces can be seen as a hierarchical search space with a single searchable level, but having
+two or more levels allows us to search over more diverse and complex architecture designs.
+Furthermore, a hierarchical representation of a large architecture is an effective way to
+reduce the search complexity, which can lead to better search efficiency (Chrostoforidis
+et al., 2021; Liu et al., 2018b; Ru et al., 2020b). On the other hand, hierarchical search
+spaces can be more challenging to implement and search through.
+2.6 Architecture Encodings
+Throughout this section, we have discussed a wide variety of NAS search spaces. As a
+segue into the next two sections focusing on search strategies, we note that many NAS
+algorithms and subroutines need to have a succinct representation of each architecture, or
+encoding, in order to perform operations such as mutating an architecture, quantifying the
+similarity between two architectures, or predicting the test performance of an architecture.
+This makes architecture encodings important for several areas of NAS, including discrete
+NAS algorithms (Section 3) and performance prediction (Section 5.1).
+In most search spaces, the architecture can be represented compactly as a directed acyclic
+graph (DAG), where each node or edge represents an operation. For example, architectures
+in cell-based search spaces and chain-structured search spaces can be represented in this
+way. However, hierarchical search spaces cannot be represented fully using a DAG, and
+often need a conditionally-structured encoding, where the number of levels of conditional
+hyperparameters correspond to the number of levels of the hierarchy.
+For cell-based search spaces, one of the most commonly-used encodings is the adjacency
+matrix of the searchable cell(s) along with a list of operations (Ying et al., 2019; Zoph and
+Le, 2017). In order to have better generalizability, Ning et al. (2020) proposed a graph-
+based encoding scheme and White et al. (2021a) proposed a path-based encoding scheme,
+both of which model the flow of propagating information in the network. Finally, another
+type of encoding for all search spaces is a learned encoding using unsupervised pre-training.
+In this technique, before we run NAS, we use a set of untrained architectures to learn an
+architecture encoding, for example, by using an autoencoder (Li et al., 2020b; Lukasik et al.,
+2021, 2022; Yan et al., 2020; Zhang et al., 2019) or a transformer (Yan et al., 2021a).
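The adjacency-matrix and path-based encodings can both be illustrated on a toy cell. In the sketch below, the DAG, op names, and op vocabulary are made up for the example (real NAS-Bench-101 cells have up to seven nodes and a three-op vocabulary); the path encoding records the op sequence along every input-to-output path, mirroring the flow of information through the network:

```python
# Toy cell as a DAG: node 0 = input, node 3 = output, internal nodes carry ops.
adjacency = [  # adjacency[i][j] = 1 iff there is an edge i -> j (upper triangular)
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
ops = ["input", "conv3x3", "maxpool3x3", "output"]
OP_CHOICES = ["conv1x1", "conv3x3", "maxpool3x3"]  # hypothetical vocabulary

def adjacency_encoding(adj, ops):
    """Flatten the upper-triangular adjacency, then one-hot each internal node's op."""
    n = len(adj)
    bits = [adj[i][j] for i in range(n) for j in range(i + 1, n)]
    for op in ops[1:-1]:  # skip the input/output markers
        bits += [1 if op == choice else 0 for choice in OP_CHOICES]
    return bits

def path_encoding(adj, ops):
    """The sorted set of op-sequences along every input -> output path."""
    n = len(adj)
    paths = []
    def walk(node, trail):
        if node == n - 1:
            paths.append(tuple(trail))
            return
        for nxt in range(n):
            if adj[node][nxt]:
                walk(nxt, trail + ([ops[nxt]] if 0 < nxt < n - 1 else []))
    walk(0, [])
    return sorted(paths)

print(adjacency_encoding(adjacency, ops))  # [1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1]
print(path_encoding(adjacency, ops))       # [('conv3x3',), ('maxpool3x3',)]
```

Note that the adjacency encoding is sensitive to node ordering (isomorphic graphs can get different vectors), which is one motivation for the path-based and graph-based alternatives cited above.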
+When choosing an architecture encoding, scalability and generalizability are important
+traits. Recent work has shown that different NAS subroutines, such as sampling a random
+architecture, perturbing an architecture, or training a surrogate model, may each perform
+best with different encodings (White et al., 2020). Furthermore, even small changes to the
+architecture encoding scheme can have significant effects on the performance of NAS (White
+et al., 2020; Ying et al., 2019).
+3. Black-Box Optimization Techniques
+Now that we have covered search spaces, we move to perhaps the most widely-studied com-
+ponent of NAS: the search strategy. This is what we run to find an optimal architecture
+from the search space. Search strategies generally fall into two categories: black-box op-
+timization techniques and one-shot techniques. However, some methods that we discuss
+include characteristics of both, or neither, of these categories. We first discuss black-box
+optimization techniques in this section, followed by one-shot techniques in Section 4.
+For black-box optimization, we discuss baselines (Section 3.1), reinforcement learning
+(Section 3.2), evolution (Section 3.3), Bayesian optimization (Section 3.4), and Monte-Carlo
+tree search (Section 3.5). Black-box optimization techniques are widely used and studied
+today, due to their strong performance and ease of use. In general, black-box optimization
+techniques tend to use more computational resources than one-shot techniques, due to
+training many architectures independently (without sharing weights across architectures like
+one-shot techniques). However, they also have many advantages over one-shot techniques,
+such as robustness (and the lack of catastrophic failure modes), simpler optimization of non-
+differentiable objectives, simpler parallelism, joint optimization with other hyperparameters,
+and easier adaptation to, e.g., new problems, datasets or search spaces. They are also often
+conceptually simpler, making them easier to implement and use.
+3.1 Baselines
+One of the simplest possible baselines for NAS is random search: architectures are selected
+randomly from the search space and then fully trained. In the end, the architecture with
+the best validation accuracy is outputted. Despite its naïveté, multiple papers have shown
+that random search performs surprisingly well (Chen et al., 2018; Li and Talwalkar, 2019;
+Sciuto et al., 2020; Yang et al., 2020). This is especially true for highly engineered search
+spaces with a high fraction of strong architectures, since random search with a budget
+of k evaluations will, in expectation, find architectures in the top 100/k% of the search
+space. However, other works show that random search does not perform well on large,
+diverse search spaces (Bender et al., 2020; Real et al., 2020). Still, random search is highly
+recommended as a baseline comparison for new NAS algorithms (Lindauer and Hutter, 2020;
+Yang et al., 2020), and can be made highly competitive by incorporating weight sharing
+(Li and Talwalkar, 2019), zero-cost proxies (Abdelfattah et al., 2021), or learning curve
+extrapolation (Yan et al., 2021b). Multiple papers (Sciuto et al., 2020; Yang et al., 2020)
+have also proposed a related, simpler baseline: random sampling, the average performance
+of architectures across the entire search space.
+Algorithm 1 General Reinforcement Learning NAS Algorithm
+Input: Search space A, number of iterations T.
+Randomly initialize weights θ of the controller architecture.
+for t = 1, . . . , T do
+Train architecture a ∼ π(a; θ), randomly sampled from the controller policy π(a; θ).
+Update controller parameters θ by performing a gradient update ∇θEa∼π(a;θ)[Lval(a)].
+end for
+Output: Architecture selected from the trained policy π(a; θ∗)
+In addition to random search, recent papers showed that local search is a strong baseline
+for NAS on both small (Ottelander et al., 2021; White et al., 2021b) and large (Siems et al.,
+2020) search spaces. This is true even for the simplest form of local search: iteratively
+train and evaluate all of the neighbors of the best architecture found so far, where the
+neighborhood is typically defined as all architectures which differ by one operation or edge.
+Local search can be sped up substantially by using network morphisms to warm-start the
+optimization of neighboring architectures (Elsken et al., 2017).
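The local search loop described here fits in a few lines. The sketch below uses a toy search space (architectures are tuples of operation indices, neighbors differ in exactly one position, as in the text) and a made-up `evaluate` function standing in for training plus validation:

```python
import random

# Toy search space: NUM_SLOTS positions, each with OPS_PER_SLOT operation choices.
OPS_PER_SLOT, NUM_SLOTS = 3, 6

def evaluate(arch):
    """Hypothetical stand-in for validation accuracy: prefer op 2 everywhere."""
    return sum(1.0 for op in arch if op == 2) / NUM_SLOTS

def neighbors(arch):
    """All architectures differing from `arch` in exactly one operation."""
    for i in range(NUM_SLOTS):
        for op in range(OPS_PER_SLOT):
            if op != arch[i]:
                yield arch[:i] + (op,) + arch[i + 1:]

def local_search(seed=0):
    rng = random.Random(seed)
    best = tuple(rng.randrange(OPS_PER_SLOT) for _ in range(NUM_SLOTS))
    best_acc = evaluate(best)
    improved = True
    while improved:  # evaluate neighbors of the incumbent until none improves
        improved = False
        for cand in neighbors(best):
            acc = evaluate(cand)
            if acc > best_acc:
                best, best_acc, improved = cand, acc, True
                break  # move to the improving neighbor, then restart
    return best, best_acc

print(local_search())  # ((2, 2, 2, 2, 2, 2), 1.0) on this separable toy objective
```

On a real search space, each `evaluate` call is a full training run, which is why local search is expensive without speedups such as the network morphisms mentioned above.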
+3.2 Reinforcement Learning
+Reinforcement learning (RL) was very prominent in the early days of modern NAS. Notably,
+the seminal work by Zoph and Le (2017) used RL on 800 GPUs for two weeks to obtain
+competitive performance on CIFAR-10 and Penn Treebank; this finding received substantial
+media attention and started the modern resurgence of NAS. This was followed up by several
+more reinforcement learning approaches (Pham et al., 2018; Zoph et al., 2018).
+Most reinforcement learning approaches model the architectures as a sequence of actions
+generated by a controller (Baker et al., 2017; Zoph and Le, 2017). The validation accuracy
+of the sampled architectures after training is used as a reward signal to update the con-
+troller in order to maximize its expected value. See Algorithm 1. The controller is usually
+a recurrent neural network (RNN) (Zoph and Le, 2017; Zoph et al., 2018) that outputs a
+sequence of components corresponding to an architecture. After each outputted architec-
+ture is trained and evaluated, the RNN parameters are updated to maximize the expected
+validation accuracy of outputted architectures, using REINFORCE (Williams, 1992; Zoph
+and Le, 2017) or proximal policy optimization (Schulman et al., 2017; Zoph et al., 2018).
+ENAS (Pham et al., 2018) follows a similar strategy but speeds up the reward estimation
+using weight sharing; we will discuss this in detail in Section 4.
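The controller update in Algorithm 1 can be sketched with plain REINFORCE. In the toy below, the controller is reduced to independent per-slot softmax logits rather than an RNN, and the reward is a made-up stand-in for validation accuracy, so this illustrates only the policy-gradient mechanics, not any published system:

```python
import math
import random

NUM_SLOTS, NUM_OPS, LR = 4, 3, 0.3

def reward(arch):
    """Hypothetical stand-in for validation accuracy: op 1 is best in every slot."""
    return sum(op == 1 for op in arch) / NUM_SLOTS

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def train_controller(steps=500, seed=0):
    rng = random.Random(seed)
    theta = [[0.0] * NUM_OPS for _ in range(NUM_SLOTS)]  # controller "parameters"
    baseline = 0.0
    for _ in range(steps):
        probs = [softmax(t) for t in theta]
        arch = [rng.choices(range(NUM_OPS), weights=p)[0] for p in probs]
        r = reward(arch)                       # in real NAS: train, then validate
        baseline = 0.9 * baseline + 0.1 * r    # moving-average variance-reduction baseline
        adv = r - baseline
        for slot, op in enumerate(arch):
            for k in range(NUM_OPS):
                # grad of log pi for a categorical: one-hot(chosen op) - probs
                grad = (1.0 if k == op else 0.0) - probs[slot][k]
                theta[slot][k] += LR * adv * grad
    return [max(range(NUM_OPS), key=lambda k: t[k]) for t in theta]

print(train_controller())  # typically converges to [1, 1, 1, 1]
```

Replacing the per-slot logits with an RNN and the toy reward with real validation accuracy recovers the overall shape of the Zoph and Le (2017) approach, though their system has many further details (e.g., distributed training, reward shaping).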
+More recently, RL has not been used prominently for NAS, since it has been shown to
+be outperformed in head-to-head comparisons by evolutionary methods (Real et al., 2019)
+and Bayesian optimization (Ying et al., 2019), which we will discuss next.
+Algorithm 2 General Evolutionary NAS Algorithm
+Input: Search space A, number of iterations T.
+Randomly sample and train a population of architectures from the search space A.
+for t = 1, . . . , T do
+Sample (based on accuracy) a set of parent architectures from the population.
+Mutate the parent architectures to generate children architectures, and train them.
+Add the children to the population, and kill off the architectures that are the oldest
+(or have the lowest accuracy) among the current population.
+end for
+Output: Architecture from the population with the highest validation accuracy.
+3.3 Evolutionary and Genetic Algorithms
+Decades before the recent NAS resurgence, one of the first works in NAS used an evolution-
+ary algorithm (Miller et al., 1989). In other early works, it was common to use evolutionary
+algorithms to simultaneously optimize the neural architecture and its weights (Angeline
+et al., 1994; Floreano et al., 2008; Stanley and Miikkulainen, 2002; Stanley et al., 2009).
+Today, evolutionary algorithms are still popular for the optimization of architectures due to
+their flexibility, conceptual simplicity, and competitive results (Real et al., 2019), but the
+weight optimization is typically left to standard SGD-based approaches.
+Evolutionary NAS algorithms work by iteratively updating a population of architectures.
+In each step, one or more “parent” architectures in the population are sampled (typically
+based on the validation accuracy of the architectures), combined and mutated to create new
+“children” architectures. These architectures are then trained and added to the population,
+replacing individuals in the population with worse performance. See Algorithm 2.
+There are many other ways in which evolutionary algorithms differ, including sampling
+the initial population, selecting the parents, and generating the children. For selecting
+the initial population, approaches include using trivial architectures (Real et al., 2017),
+randomly sampling architectures from the search space (Real et al., 2019; Sun et al., 2019),
+or using hand-picked high-performing architectures (Fujino et al., 2017).
+Selecting parents from the population makes up one of the core components of the
+evolutionary algorithm. Perhaps the most popular method to sample parents is tournament
+selection (Almalaq and Zhang, 2018; Goldberg and Deb, 1991; Real et al., 2017, 2019;
+Sun et al., 2019, 2020), which selects the best architecture(s) out of a randomly sampled
+population. Other common approaches include random sampling weighted by fitness (Gibb
+et al., 2018; Loni et al., 2020; Song et al., 2020; Xie and Yuille, 2017), or choosing the current
+best architecture(s) as parents (Elsken et al., 2017; Suganuma et al., 2017, 2018). These
+methods trade off exploration vs. exploiting the best region found so far. One particularly
+successful evolutionary algorithm is regularized evolution by Real et al. (2019). This is a
+fairly standard evolutionary method, with the novelty of dropping the architecture in each
+step that has been in the population the longest, even if it has the highest performance. This
+method outperformed random search and RL in a head-to-head comparison and achieved
+state-of-the-art performance on ImageNet at the time of its release (Real et al., 2019).
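The aging mechanism is easy to see in code. The sketch below follows the spirit of regularized evolution (tournament parent selection, one random mutation per child, oldest member removed each round); the search space and evaluation function are toy stand-ins, and hyperparameters such as the population and tournament sizes are illustrative, not the published values:

```python
import collections
import random

NUM_SLOTS, NUM_OPS = 8, 4

def evaluate(arch):
    """Hypothetical stand-in for trained validation accuracy: op 0 is best everywhere."""
    return sum(op == 0 for op in arch) / NUM_SLOTS

def mutate(arch, rng):
    i = rng.randrange(NUM_SLOTS)
    return arch[:i] + (rng.randrange(NUM_OPS),) + arch[i + 1:]

def regularized_evolution(pop_size=20, cycles=400, sample_size=5, seed=0):
    rng = random.Random(seed)
    population = collections.deque()  # oldest member sits at the left end
    history = []
    for _ in range(pop_size):
        arch = tuple(rng.randrange(NUM_OPS) for _ in range(NUM_SLOTS))
        population.append((arch, evaluate(arch)))
        history.append(population[-1])
    for _ in range(cycles):
        tournament = rng.sample(list(population), sample_size)
        parent = max(tournament, key=lambda x: x[1])[0]   # tournament selection
        child = mutate(parent, rng)
        population.append((child, evaluate(child)))
        population.popleft()   # aging: drop the oldest, regardless of accuracy
        history.append(population[-1])
    return max(history, key=lambda x: x[1])               # best ever evaluated

best_arch, best_acc = regularized_evolution()
print(best_acc)  # climbs toward 1.0 on this toy objective
```

The `popleft` line is the "regularization": because even the best individual eventually ages out, its traits only persist if they keep being rediscovered by children, which biases the search toward architectures that retrain well.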
+Algorithm 3 General Bayesian Optimization NAS Algorithm
+Input: Search space A, number of iterations T, acquisition function φ.
+Randomly sample and train a population of architectures from the search space A.
+for t = 1, . . . , T do
+Train a surrogate model based on the current population.
+Select architecture at by maximizing φ (a) , based on the surrogate model.
+Train architecture at and add it to the current population.
+end for
+Output: Architecture from the population with the highest validation accuracy.
+3.4 Bayesian Optimization
+Bayesian optimization (BO, see, e.g. Frazier (2018) or Garnett (2023)) is a powerful method
+for optimizing expensive functions, and it has seen significant success within NAS. There
+are two key components to BO: (1) building a probabilistic surrogate to model the unknown
+objective based on past observations, and (2) defining an acquisition function to balance
+the exploration and exploitation during the search. BO is an iterative algorithm which
+works by selecting the architecture that maximizes the acquisition function (computed us-
+ing the surrogate), training this architecture, and retraining the surrogate using this new
+architecture to start the next iteration. See Algorithm 3.
+Initial BO-based NAS techniques developed custom distance metrics among architec-
+tures, for example, with a specialized architecture kernel (Swersky et al., 2014), an opti-
+mal transport-inspired distance function (Kandasamy et al., 2018), or a tree-Wasserstein
+distance function (Nguyen et al., 2021), allowing a typical Gaussian process (GP) based
+surrogate with BO. However, using a standard GP surrogate often does not perform well
+for NAS, as search spaces are typically high-dimensional, non-continuous, and graph-like.
+To overcome this, one line of work first encodes the architectures, using encodings discussed
+in Section 2.6, and then trains a model, such as a tree-Parzen estimator (Bergstra et al.,
+2011; Falkner et al., 2018), random forest (Hutter et al., 2011; Ying et al., 2019), or neural
+network (Springenberg et al., 2016; White et al., 2021a). Another line of work projects
+architecture information into a low-dimensional continuous latent space on which conven-
+tional BO can be applied effectively (Ru et al., 2020b; Wan et al., 2022a). Another class
+of surrogate models use graph neural networks (Ma et al., 2019; Ru et al., 2021; Shi et al.,
+2020) or a graph-based kernel (Ru et al., 2021) to naturally handle the graph representation
+of architectures without the need for an explicit encoding.
+The acquisition function, which trades off exploration and exploitation during the search,
+is another important design component for BO. There are various types of acquisition func-
+tions used in NAS, such as expected improvement (Jones et al., 1998; Močkus, 1975), upper
+confidence bound (Cox and John, 1992; Srinivas et al., 2010) and information-theoretic ones
+(Hennig and Schuler, 2012; Hernández-Lobato et al., 2014; Hvarfner et al., 2022; Wang and
+Jegelka, 2017). In NAS, optimizing the acquisition function in each round of BO is chal-
+lenging due to the non-continuous search spaces, and furthermore, exhaustively evaluating
+acquisition function values on all possible architectures is computationally non-viable. The
+most common method for optimizing the acquisition function in NAS is by randomly mu-
+tating a small pool of the best architectures queried so far, and of the mutated architectures,
+selecting the one(s) with the highest acquisition function value (Kandasamy et al., 2018;
+Ma et al., 2019; Ru et al., 2021; Schneider et al., 2021; Shi et al., 2020; White et al., 2021a).
+Other methods for optimizing the acquisition function include local search, evolutionary
+search, and random search (Ru et al., 2021; Shi et al., 2020; Ying et al., 2019).
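Putting Algorithm 3 and the mutation-based acquisition optimization together yields a short loop. In the sketch below, the surrogate is deliberately simple (it predicts an architecture's accuracy as the mean observed accuracy of its per-slot choices, a toy stand-in for a real random-forest, GP, or GNN surrogate), and the greedy acquisition omits an exploration term; everything here is illustrative:

```python
import random
import statistics

NUM_SLOTS, NUM_OPS = 6, 4

def true_accuracy(arch):
    """Hypothetical expensive black box (train + validate): op 3 is best everywhere."""
    return sum(op == 3 for op in arch) / NUM_SLOTS

class MeanSurrogate:
    """Predict via the average observed accuracy of each (slot, op) pair."""
    def fit(self, data):
        buckets = {}
        for arch, acc in data:
            for slot, op in enumerate(arch):
                buckets.setdefault((slot, op), []).append(acc)
        self.table = {k: statistics.mean(v) for k, v in buckets.items()}
        self.prior = statistics.mean(acc for _, acc in data)
    def predict(self, arch):
        return statistics.mean(
            self.table.get((slot, op), self.prior) for slot, op in enumerate(arch))

def bayes_opt(iterations=30, init=10, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(init):  # random initial design
        arch = tuple(rng.randrange(NUM_OPS) for _ in range(NUM_SLOTS))
        data.append((arch, true_accuracy(arch)))
    for _ in range(iterations):
        model = MeanSurrogate()
        model.fit(data)
        # Acquisition optimization: mutate a small pool of the best archs so far.
        pool = sorted(data, key=lambda x: -x[1])[:5]
        cands = []
        for arch, _ in pool:
            for _ in range(10):
                i = rng.randrange(NUM_SLOTS)
                cands.append(arch[:i] + (rng.randrange(NUM_OPS),) + arch[i + 1:])
        best = max(cands, key=model.predict)  # greedy acquisition (no exploration bonus)
        data.append((best, true_accuracy(best)))
    return max(data, key=lambda x: x[1])

arch, acc = bayes_opt()
print(acc)
```

Only the surrogate and acquisition machinery runs per candidate; the expensive `true_accuracy` call happens once per iteration, which is the whole point of BO for NAS.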
+3.5 Monte Carlo Tree Search
+Another class of NAS methods is based on Monte Carlo Tree Search (MCTS). MCTS is the
+key backbone search algorithm used in AlphaGO (Silver et al., 2016) and AlphaZero (Silver
+et al., 2017), which achieve super-human performance in Go and chess, respectively. MCTS
+finds optimal decisions by recursively sampling new decisions (e.g., making a move in chess,
+or selecting an operation for an architecture in NAS), running stochastic rollouts to obtain
+the reward (such as winning a chess game, or discovering a high-performing architecture)
+and then backpropagating to update the weight of the initial decision. Across iterations,
+the algorithm builds a decision tree to bias the search towards more promising regions by
+balancing exploration and exploitation in decision making (Browne et al., 2012).
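The exploration/exploitation balance in that decision tree is usually implemented with the UCT rule. The short sketch below shows the standard UCB-style score (mean reward plus an exploration bonus that shrinks with visit count); the child statistics are made-up numbers for illustration:

```python
import math

def uct_score(mean_reward, child_visits, parent_visits, c=math.sqrt(2)):
    """Standard UCT: mean reward plus an exploration bonus for rarely tried children."""
    if child_visits == 0:
        return float("inf")  # untried decisions are always explored first
    return mean_reward + c * math.sqrt(math.log(parent_visits) / child_visits)

def select_child(children):
    """children: list of dicts with 'mean' and 'visits'; returns the index to expand."""
    parent_visits = max(1, sum(ch["visits"] for ch in children))
    return max(range(len(children)),
               key=lambda i: uct_score(children[i]["mean"],
                                       children[i]["visits"], parent_visits))

children = [{"mean": 0.8, "visits": 100},   # well-explored, strong operation choice
            {"mean": 0.5, "visits": 2},     # weaker so far, but barely explored
            {"mean": 0.0, "visits": 0}]     # never tried
print(select_child(children))  # 2: the untried operation is selected first
```

Once every child has been visited, the bonus term takes over: in the example above, dropping the untried child makes UCT pick the barely explored index 1 over the high-mean index 0, which is exactly the bias toward under-explored regions described in the text.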
+MCTS was first applied to NAS by Negrinho and Gordon (2017) who represented the
+search space and its hyperparameters using a modular language. This results in a tree-
+structured, extensible search space, contrary to the fixed search spaces of prior work. Wis-
+tuba (2018) introduced a similar method but with two different UCT (Upper Confidence
+bounds applied to Trees) algorithms. MCTS was first adapted to cell-based search spaces by
+using a state-action representation (Wang et al., 2018). The authors also improved sample
+efficiency by using a neural network to estimate the accuracy of sampled architectures, thus
+enabling a higher number of rollouts. Follow-up work added further efficiency by learning
+partitionings to prune the tree (Wang et al., 2020b), and applied MCTS to multi-objective
+NAS (Zhao et al., 2021a).
+4. One-Shot Techniques
+Throughout Section 3, we have seen that the predominant methodology in the early stages
+of NAS research was to iteratively sample architectures from the search space, train them,
+and use their performance to guide the search. The main drawback of these methods, when
+applied without speedup techniques, is their immense computational cost, sometimes on
+the order of thousands of GPU days (Real et al., 2019; Zoph and Le, 2017) due to the need
+to train thousands of architectures independently and from scratch.3
+As an alternative, one-shot techniques were introduced to avoid training each architec-
+ture from scratch, thus circumventing the associated computational burden. As of 2022,
+they are one of the most popular techniques in NAS research. Rather than training each
+architecture from scratch, one-shot approaches implicitly train all architectures in
+the search space via a single (“one-shot”) training of a hypernetwork or supernetwork.
+A hypernetwork is a neural network which generates the weights of other neural net-
+works (Schmidhuber, 1992), while a supernetwork (often used synonymously with “one-shot
+3. On the other hand, recent developments in performance estimation and speed-up techniques (Section 5)
+have significantly improved the computational overhead of methods that use black-box optimization as
+a base, making these methods affordable for many applications and users.
+Figure 5: A supernet comprises all possible architectures in the search space. Each archi-
+tecture is a subnetwork (subgraph) in the supernet.
+model” in the literature) is an over-parameterized architecture that contains all possible ar-
+chitectures in the search space as subnetworks (see Figure 5). The idea of a supernetwork
+was introduced by Saxena and Verbeek (2016) and was popularized in 2018 by works such
+as Bender et al. (2018), Pham et al. (2018), and Liu et al. (2019c).
+Once a supernet is trained, each architecture from the search space can be evaluated by
+inheriting its weights from the corresponding subnet within the supernet. The reason for
+the scalability and efficiency of supernets is that a linear increase in the number of candidate
+operations only causes a linear increase in computational costs for training, but the number
+of subnets in the supernet increases exponentially. Therefore, supernets allow us to train
+an exponential number of architectures for a linear compute cost.
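The linear-cost/exponential-coverage argument is a simple count. For a chain of L searchable layer slots with k candidate operations each (an assumed simplification; real supernets also share weights across positions in more intricate ways):

```python
def supernet_op_count(num_layers, num_ops):
    """Operation weight sets the supernet must train: linear in both factors."""
    return num_layers * num_ops

def num_subnets(num_layers, num_ops):
    """Distinct subnetworks the supernet contains: exponential in depth."""
    return num_ops ** num_layers

for k in (2, 4, 8):
    print(k, supernet_op_count(10, k), num_subnets(10, k))
# with 10 layers: doubling k doubles the training cost
# but multiplies the number of subnets by 2^10 = 1024
```

So at k = 8 and 10 layers, 80 trained operation weight sets stand in for 8^10 (over a billion) candidate architectures, which is the source of the supernet's efficiency.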
+A key assumption made in one-shot approaches is that when using the one-shot model to
+evaluate architectures, the ranking of architectures is relatively consistent with the ranking
+one would obtain from training them independently. The extent to which this assumption
+holds true has been substantially debated, with work showing evidence for (Li et al., 2021c;
+Pham et al., 2018; Yu et al., 2020) and against (Pourchot et al., 2020; Sciuto et al., 2020;
+Zela et al., 2020b; Zhang et al., 2020b) the claim across various settings. The validity of the
+assumption is dependent on the search space design, the techniques used to train the one-
+shot model, and the dataset itself, and it is hard to predict to what degree the assumption
+will hold in a particular case (Sciuto et al., 2020; Zhang et al., 2020b).
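Empirically, this ranking assumption is often quantified with a rank correlation such as Kendall's tau between supernet proxy accuracies and stand-alone training accuracies. The sketch below implements the plain tau coefficient from scratch on made-up numbers (real studies compute it over hundreds of architectures):

```python
def kendall_tau(xs, ys):
    """Kendall's tau (no tie correction): +1 = identical ranking, -1 = reversed."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

proxy      = [0.60, 0.72, 0.65, 0.80, 0.55]   # hypothetical supernet estimates
standalone = [0.88, 0.91, 0.90, 0.93, 0.86]   # hypothetical full-training accuracies
print(kendall_tau(proxy, standalone))  # 1.0: the two rankings agree perfectly here
```

A tau near 1 supports using the supernet for ranking; values near 0, reported in some of the studies cited above, indicate that weight sharing scrambles the ranking for that search space.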
+While the supernet allows quick evaluation of all architectures, we must still decide on a
+search strategy, which can be as simple as running a black-box optimization algorithm while
+the supernet is training (such as in Pham et al. (2018)) or after the supernet is trained (such
+as in Bender et al. (2018)). We discuss these families of techniques in Section 4.1. A popular
+line of work uses gradient descent to optimize the architecture hyperparameters in tandem
+with training the supernet (such as DARTS (Liu et al., 2019c) and numerous subsequent
+methods). We discuss this family of techniques in Section 4.2. Finally, in Section 4.3, we
+discuss hypernetworks. Figure 6 provides a taxonomy of one-shot families.
+[Figure 6 diagram: one-shot methods split into hypernetwork methods (e.g., SMASH, GHNN)
+and supernetwork methods (e.g., DARTS, OFA); supernetwork methods use either differentiable
+optimization (e.g., DARTS) or non-differentiable optimization (e.g., OFA); DARTS "fixes"
+address operation biases (e.g., DARTS-PT), rank disorder (e.g., SGAS), high memory (e.g.,
+PC-DARTS), and poor generalization (e.g., Robust-DARTS).]
+Figure 6: A taxonomy of the predominant one-shot families. A hypernetwork is a neural
+net which generates the weights of other neural nets. A supernetwork is an over-
+parameterized neural net that contains the set of neural nets from the search space
+as subnetworks, and it can be used with differentiable optimization (including
+DARTS and follow-ups), or non-differentiable optimization.
+4.1 Non-Differentiable Supernet-Based Methods
+We start by describing supernet-based methods which do not make use of differentiable
+optimization. Some methods in this family decouple the supernet training and architecture
+search: first train a supernet, and then run a black-box optimization algorithm to search
+for the best architecture. Other methods train a supernet while simultaneously running a
+non-differentiable search algorithm, such as reinforcement learning, to select subnetworks.
+Bender et al. (2018), Li and Talwalkar (2019), and Guo et al. (2020b) propose simple
+methods to train the supernet and then use a black-box optimization algorithm to extract
+the best architecture from it.
+Bender et al. (2018) construct the supernet by creating a separate node for each candidate
+operation wherever there is a choice of operation; they then train the supernet as if it were
+a standard neural net, with one exception: nodes are randomly dropped during training,
+with the dropout rate increasing linearly throughout training. In follow-up work, Li and
+Talwalkar (2019) and Guo et al.
+(2020b) take this idea a step further: in each training step, they randomly sample one
+architecture and only update the weights of the supernet corresponding to that architecture.
+These techniques better mimic what is happening at evaluation time: only a subnetwork is
+evaluated rather than the entire supernet. Furthermore, these procedures use significantly
+less memory than training all the weights of a supernet. Each method concludes by using the
+trained supernet to quickly evaluate architectures when conducting random search (Bender
+et al., 2018; Li and Talwalkar, 2019) or evolutionary search (Guo et al., 2020b).
+The
+architecture identified in the end is then trained from scratch.
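+As a toy illustration of this single-path weight-sharing scheme (all names and numbers below are invented for exposition, not taken from any of the cited implementations), one can sample one subnetwork per step and update only its weights:
+

```python
import random

# Toy "supernet" with two layers, each offering several candidate
# operations. Shared weights live in a dict keyed by (layer, op); each
# training step samples one random subnetwork and updates only the
# weights that subnetwork uses.
OPS = ["conv3x3", "conv5x5", "skip"]

def init_supernet(num_layers=2):
    return {(layer, op): 0.0 for layer in range(num_layers) for op in OPS}

def sample_architecture(num_layers=2):
    return [random.choice(OPS) for _ in range(num_layers)]

def train_step(weights, arch, grad=0.1):
    # Update only the weights belonging to the sampled subnetwork.
    for layer, op in enumerate(arch):
        weights[(layer, op)] += grad

random.seed(0)
weights = init_supernet()
for _ in range(100):
    train_step(weights, sample_architecture())

# Every step touched exactly one op per layer, so per-layer updates sum
# to 100 * 0.1 = 10.0 regardless of how the samples were distributed.
layer0_total = sum(weights[(0, op)] for op in OPS)
print(round(layer0_total, 6))  # 10.0
```

+Because every step updates exactly one operation per layer, memory and compute per step match a single subnetwork rather than the full supernet.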
+As will be discussed in Section 6.2, deploying neural nets in practice often comes with
+constraints on latency or memory. While the supernets considered thus far tend to only
+contain architectures of approximately the same size, Cai et al. (2020) propose a supernet
+containing subnetworks of various sizes. This Once-for-All (OFA) approach uses a progressive
+shrinking strategy, which starts by sampling the largest subnetworks and then moves
+
+White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter
+Algorithm 4 DARTS - Differentiable Architecture Search
+Input: Search space A, number of iterations T, hyperparameter ξ.
+Randomly initialize a one-shot model based on A with weights w and architecture hy-
+perparameters α.
+for t = 1, . . . , T do
+Perform a gradient update on the architecture weights α according to Equation 1.
+Perform a gradient update on w according to ∇wLtrain(w, α).
+end for
+Output: Derive the final architecture by taking the argmax of α across all operation
+choices, and then retrain this architecture from scratch.
+to smaller subnetworks, in order to minimize the co-adaptation among subnetworks and
+effectively train networks of different sizes “once for all”. In a subsequent search phase,
+architectures are selected based on different constraints on latency and memory. While
+Cai et al. (2020) use random search for this phase, Guo et al. (2020b) propose to improve
+the approach further by using evolutionary search.
+One of the earliest supernet-based approaches is ENAS (Efficient Neural Architecture
+Search) (Pham et al., 2018), which trains the supernet while running a search algorithm
+in tandem. Specifically, the search strategy is similar to the RL controller-based approach
+from Zoph and Le (2017) (described in Section 3.2) but estimates the performance of each
+architecture using a supernet. The training procedure alternates between (i) selecting an
+architecture, evaluating it, and updating the weights of the supernet, and (ii) updating the
+weights of the controller by sampling several architectures to estimate the REINFORCE reward.
+While this approach searches for an architecture in tandem with training the supernet, it
+uses a separate controller network to guide the search.
+In the next section, we discuss
+methods which conduct the search via gradient descent using only the supernet.
+4.2 Differentiable Supernet-Based Methods
+In this section, we review supernet-based NAS methods that employ differentiable optimiza-
+tion techniques. We first describe the seminal DARTS (Differentiable Architecture Search)
+approach by Liu et al. (2019c), and then we move to various follow-up works and other
+differentiable approaches.
+The DARTS approach uses a continuous relaxation of the discrete architecture search
+space, which enables the use of gradient descent in order to find a high-performing local
+optimum significantly faster than black-box optimization methods. It can be applied to any
+DAG-based search space which has different choices of operations on each edge by using a
+“zero” operation to simulate the absence of an edge.
+At the start, each edge (i, j) in the DARTS search space consists of multiple possible
+candidate operations o, each of which is associated with a continuous hyperparameter
+α_o^(i,j) ∈ [0, 1]. While the supernet is training, edge (i, j) computes a mix of all candidate
+operations, weighted by each α_o^(i,j). The architecture hyperparameters α are optimized
+jointly with the supernet model weights w via alternating gradient descent. In particular,
+in order to update the architecture weights α via gradient descent, DARTS makes use of
+the following approximation:
+
+∇αLval(w∗(α), α) ≈ ∇αLval(w − ξ∇wLtrain(w, α), α),        (1)
+
+where Ltrain denotes the training loss, Lval denotes the validation loss, ξ is the learning
+rate, and w∗(α) denotes the weights that minimize the training loss of the architecture
+corresponding to α. In other words, in order to avoid the expensive inner optimization,
+w∗(α) is approximated by a single step of gradient descent (w − ξ∇wLtrain(w, α)). This is
+similar to MAML (Finn et al., 2017) and other works (Luketina et al., 2016; Metz et al.,
+2017). Although this strategy is not guaranteed to converge, Liu et al. (2019c) showed
+that it works well in practice with a suitable choice of ξ. After the training phase, DARTS
+obtains a discrete architecture by selecting the operation with the maximum value of α on
+each edge (the discretization step) and then re-trains it from scratch. Figure 7 provides an
+illustration of DARTS.
+
+Figure 7: Differentiable one-shot NAS algorithms have four main steps: randomly initializ-
+ing the architecture hyperparameters, optimizing the architecture hyperparameters
+and weights via alternating gradient descent, discretizing the optimized architecture
+hyperparameters, and re-training the resulting subnetwork from scratch.
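+A minimal sketch of the continuous relaxation and discretization steps, using made-up operations and numbers rather than the official DARTS code:
+

```python
import numpy as np

# Each edge mixes candidate operations weighted by a softmax over the
# architecture hyperparameters alpha; discretization keeps the argmax op.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical candidate operations on an edge, including the "zero" op
# that simulates the absence of the edge.
ops = {
    "conv": lambda x: 2.0 * x,
    "skip": lambda x: x,
    "zero": lambda x: np.zeros_like(x),
}

def mixed_op(x, alpha, ops):
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops.values()))

alpha = np.array([1.5, 0.2, -1.0])  # learned architecture hyperparameters
x = np.ones(3)
y = mixed_op(x, alpha, ops)

# Discretization step: keep the operation with the largest alpha.
chosen = list(ops)[int(np.argmax(alpha))]
print(chosen)  # "conv"
```

+In the real algorithm the operations are neural network layers and α is updated by gradient descent on Equation 1; the sketch only shows how the softmax mixture and the argmax discretization fit together.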
+DARTS gained significant attention in the AutoML community due to its simplicity,
+its novelty, and the release of easy-to-use code. Furthermore, the original technique left
+room for improvement across various axes. Consequently, there has been a large body of
+follow-up work seeking to improve various parts of the DARTS approach. In the rest of the
+section, we cover the main categories of improvements (see Figure 6).
+4.2.1 Rank Disorder
+As mentioned at the start of Section 4, nearly all one-shot methods make a key assumption:
+the ranking of architectures evaluated with the supernet is relatively consistent with the
+ranking one would obtain from training them independently; when this assumption is not
+met, it is known as rank disorder (Li et al., 2021c; Sciuto et al., 2020). While there is
+considerable debate both for (Li et al., 2021c; Pham et al., 2018; Yu et al., 2020) and
+against (Pourchot et al., 2020; Sciuto et al., 2020; Zela et al., 2020b; Zhang et al., 2020b)
+the assumption, many works have attempted to reduce the problem of rank disorder.
+Several methods propose to gradually increase the network depth, or to gradually prune
+the set of operation candidates during training, showing that this causes the weights to
+better adapt to the most-promising operation choices. Progressive-DARTS (Chen et al.,
+2019a) gradually increases the network depth while simultaneously pruning the operations
+with the smallest weights. SGAS (Li et al., 2020a) chooses operations throughout the train-
+ing procedure, based on two criteria: selection certainty (calculated via the entropy of the
+operation distribution) and selection stability (calculated via the movement of the operation
+distribution). Finally, XNAS (Nayman et al., 2019) makes use of the exponentiated gradi-
+ent algorithm (Kivinen and Warmuth, 1997), which dynamically prunes inferior operation
+choices during the search while also allowing the recovery of “late bloomers”, i.e., operation
+choices which only become accurate later in the training procedure.
+4.2.2 Operation Biases
+Several works show that differentiable NAS techniques tend to favor skip connections over
+other operation choices (Liang et al., 2019; Wang et al., 2021; Zela et al., 2020a), which
+might be caused by the supernet using skip connections to over-compensate for vanishing
+gradients (Chu et al., 2021). Various methods have been proposed to fix this bias.
+DARTS+ (Liang et al., 2019) proposes an early stopping method based on the stability
+of the ranking of the architecture weights, while DARTS− (Chu et al., 2021) separates
+the skip connection weights from other operation weights via auxiliary edges. FairDARTS
+(Chu et al., 2020) treats each operation weight independently of the others, and adds a
+loss term that pushes these architecture weights toward zero or one.
+Taking a different approach, Wang et al. (2021) show that it is okay for skip connections
+to have higher weights, as long as we do not select the final architecture based on these
+weights.
+Instead, after training the supernet, their algorithm, DARTS-PT, selects on each
+edge the operation whose removal causes the largest drop in supernet accuracy.
+Rather than fixing the biases among a small hand-picked set of operations, Shen et al.
+(2022) instead use a search space that significantly reduces human bias: they fix a standard
+convolutional network and search for the kernel sizes and dilations of its operations. This
+simple approach is broadly applicable across computer vision, PDE solving, protein folding,
+and other tasks. In order to make one-shot training more efficient, their algorithm, DASH,
+computes the mixture-of-operations using the Fourier diagonalization of convolution.
+4.2.3 Poor Test Generalization
+Several works seek to improve the generalization performance of DARTS through various
+means. Zela et al. (2020a) and Chen and Hsieh (2020) show that DARTS often converges to
+sharp local minima in the loss landscape (high validation loss curvature in the architecture
+hyperparameter space), which, after running the discretization step, can cause the algo-
+rithm to return an architecture with poor test generalization. Robust-DARTS (Zela et al.,
+2020a) fixes this issue by making the training more robust through data augmentation, L2
+regularization of the inner objective Ltrain, and early stopping. Similarly, rather than op-
+timizing the training loss, Smooth-DARTS (Chen and Hsieh, 2020) optimizes the expected
+or worst-case training loss over a local neighborhood of the architecture hyperparameters.
+Taking a different approach, GAEA (Li et al., 2021c), XD (Roberts et al., 2021), and
+StacNAS (Guilin et al., 2019) all use a single-level optimization rather than the typical
+bi-level optimization, by treating the architecture hyperparameters as normal architecture
+weights, showing this leads to better generalization. Furthermore, GAEA re-parameterizes
+the architecture parameters over the simplex and updates them using the exponentiated
+gradient algorithm (similar to XNAS from Section 4.2.1), showing this is better-suited to
+the underlying geometry of the architecture search space.
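+The exponentiated gradient update underlying XNAS and GAEA can be sketched in a few lines (the learning rate and gradient values here are hypothetical, chosen only to illustrate the update):
+

```python
import numpy as np

# Exponentiated gradient step on the probability simplex: a
# multiplicative update followed by re-normalization.
def exponentiated_gradient_step(theta, grad, lr=0.5):
    theta = theta * np.exp(-lr * grad)   # multiplicative update
    return theta / theta.sum()           # re-project onto the simplex

theta = np.full(4, 0.25)                 # uniform over 4 candidate ops
grad = np.array([0.9, -0.1, 0.3, 0.1])   # hypothetical gradient estimate
for _ in range(20):
    theta = exponentiated_gradient_step(theta, grad)

print(int(np.argmax(theta)))  # op 1 (the most negative gradient) dominates
```

+The multiplicative update keeps the iterate strictly positive and on the probability simplex, which is the geometric property this line of work exploits.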
+Finally, Amended-DARTS (Bi et al., 2019) and iDARTS (Zhang et al., 2021a) both take
+the approach of deriving more accurate approximations of the gradients of α (Equation 1),
+showing that this leads to a more stable optimization and better generalization.
+4.2.4 High Memory Consumption
+The memory required to train a supernet is much higher than that of a normal neural net—it
+scales linearly with the number of candidate operations. Recall from Section 4.1 that
+multiple works reduced this memory by, in each training step, masking out all operations
+except for the ones corresponding to one or a few subnetworks. Various works have proposed
+techniques to mask out operations for differentiable NAS as well, i.e., while simultaneously
+optimizing the architecture hyperparameters.
+Cai et al. (2019) proposed ProxylessNAS, which solves this problem by modifying the
+BinaryConnect (Courbariaux et al., 2015) discretization method: in each training step, at
+each place where an operation must be chosen, all candidate operations are masked out
+except one, randomly chosen with probability proportional to its current value of α. Cai et al. (2019) show that this
+procedure converges to a single high-performing subnetwork.
+GDAS (Dong and Yang,
+2019) and DSNAS (Hu et al., 2020; Xie et al., 2018) use a Gumbel-softmax distribution
+over a one-hot encoding of the operation choices, which is a different way to allow sampling
+single operations in each training step while maintaining differentiability.
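+The two sampling ideas can be sketched side by side (a toy illustration with invented values, not code from either paper):
+

```python
import numpy as np

# Both schemes activate a single candidate operation per training step
# instead of the full mixture, saving memory.
rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def binary_sample(alpha):
    # ProxylessNAS-style: pick one op with probability proportional to
    # softmax(alpha); all other ops are masked out this step.
    return rng.choice(len(alpha), p=softmax(alpha))

def gumbel_softmax_sample(alpha, tau=1.0):
    # GDAS-style: Gumbel noise makes the argmax a sample from the same
    # categorical distribution while keeping the relaxation differentiable.
    g = -np.log(-np.log(rng.uniform(size=len(alpha))))
    return int(np.argmax((alpha + g) / tau))

alpha = np.array([2.0, 0.0, -1.0])
counts = np.zeros(3)
for _ in range(2000):
    counts[binary_sample(alpha)] += 1
print(counts / counts.sum())  # roughly softmax(alpha) ≈ [0.84, 0.11, 0.04]
```

+Only the sampled operation's activations are kept in memory for the step, while in expectation the sampling still follows the categorical distribution induced by α.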
+PC-DARTS (Xu et al., 2019b) proposes a relatively simpler approach: at each training
+step, and for each edge in the DAG, a subset of channels is sampled and sent through
+the possible operations, while the remaining channels are directly passed on to the output.
+While reducing memory due to training fewer channels, this also acts as a regularizer.
+DrNAS (Chen et al., 2021f) also reduces memory consumption by progressively increasing
+the number of channels that are forwarded to the mixed operations, and progressively
+pruning operation choices, modeled by a Dirichlet distribution.
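+A sketch of the partial channel idea (with a hypothetical mixed operation standing in for the weighted sum over candidates):
+

```python
import numpy as np

# PC-DARTS-style partial channel connections: on each edge, only a
# sampled fraction 1/K of channels goes through the mixed operation; the
# rest bypass it unchanged, cutting memory roughly by a factor of K.
rng = np.random.default_rng(1)

def partial_channel_edge(x, mixed_op, K=4):
    C = x.shape[0]
    idx = rng.permutation(C)
    active, bypass = idx[: C // K], idx[C // K :]
    out = np.empty_like(x)
    out[active] = mixed_op(x[active])   # only C/K channels are processed
    out[bypass] = x[bypass]             # remaining channels pass through
    return out

x = np.ones(8)
y = partial_channel_edge(x, lambda z: 3.0 * z, K=4)
print(sorted(y))  # six untouched channels (1.0) and two processed (3.0)
```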
+4.3 Hypernetworks
+A hypernetwork is a neural network which generates the weights of other neural networks.
+Hypernetworks were first considered by Schmidhuber (1992, 1993), and the first modern
+application was by Ha et al. (2017), who used them to obtain better weights for a fixed
+LSTM architecture. Hypernetworks have since been used for a variety of tasks, including
+HPO (Mackay et al., 2019; Navon et al., 2021), calibrating model uncertainty (Krueger
+et al., 2017), and NAS (Brock et al., 2018; Zhang et al., 2018).
+The first work to use hypernetworks for NAS (and among the first to use a one-shot
+model for NAS) was SMASH (one-Shot Model Architecture Search through Hypernetworks)
+(Brock et al., 2018). SMASH consists of two phases: first, train a hypernetwork to output
+weights for any architecture in the search space. Next, randomly sample a large set of
+architectures, generate their weights using the hypernetwork, and output the one with the
+best validation accuracy. The hypernetwork, a convolutional neural net, takes as input an
+architecture encoding and outputs a set of weights for that architecture, and is trained by
+randomly sampling an architecture, generating its weights, computing its training error,
+and then backpropagating through the entire system (including the hypernetwork weights).
+Another hypernet-based NAS algorithm is GHN (Graph Hypernetworks) (Zhang et al.,
+2018). The main difference between SMASH and GHN is the architecture encoding and the
+architecture of the hypernetwork. Specifically, the GHN hypernetwork is a mix between a
+graph neural network and a standard hypernetwork. It takes as input the computational
+graph of an architecture a and uses message-passing operations which are typical in GNNs,
+to output the weights of a. The training of the hypernetwork, and the final NAS algorithm,
+are both the same as in SMASH.
+5. Speedup Techniques
+In this section, we cover general speedup techniques for NAS algorithms, including per-
+formance prediction (Section 5.1), multi-fidelity methods (Section 5.2), meta-learning ap-
+proaches (Section 5.3), and weight inheritance (Section 5.4).
+5.1 Performance Prediction
+A large body of work has been devoted to predicting the performance of neural networks
+before they are fully trained. Such techniques have the potential to greatly speed up the
+runtime of NAS algorithms, since they remove the need to fully train each architecture under
+consideration. These speedup techniques can improve nearly all types of NAS algorithms,
+from black-box optimization (Ru et al., 2020a; White et al., 2021c) to one-shot NAS (Xiang
+et al., 2021). In this section, we discuss the performance prediction techniques themselves,
+while in Section 5.2, we discuss methods of incorporating them into NAS algorithms.
+Formally, given a search space A and architecture a ∈ A, denote the final validation
+accuracy obtained with a fixed training pipeline as f(a). A performance predictor f′ is
+defined as any function which predicts the accuracy or relative accuracy of architectures,
+without fully training them. In other words, evaluating f′(a) takes less time than evaluating
+f(a), and {f′(a) | a ∈ A} ideally has high correlation or rank correlation with {f(a) | a ∈ A}.
+In the rest of this section, we give an overview of different types of performance
+predictors, including learning curve extrapolation (Section 5.1.1), zero-cost proxies (Section
+5.1.2), and other methods (Section 5.1.3). Note that surrogate models (Section 3.4) and
+one-shot models (Section 4) can also be seen as types of performance predictors.
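+For instance, the quality of a predictor f′ can be measured by the Spearman rank correlation between its scores and the true accuracies; a small self-contained sketch with made-up numbers:
+

```python
import numpy as np

# A performance predictor f' is useful if it ranks architectures the way
# the true accuracy f does, even when its raw values are on another scale.
def spearman(a, b):
    # Rank both score vectors, then take the Pearson correlation of ranks.
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

true_acc = np.array([0.92, 0.88, 0.95, 0.90, 0.85])   # f(a), expensive
predicted = np.array([0.60, 0.40, 0.80, 0.55, 0.20])  # f'(a), cheap

print(round(spearman(true_acc, predicted), 3))  # 1.0: identical rankings
```

+Here the predictor's scores are far from the true accuracies in value but rank the five architectures identically, which is all a NAS search strategy needs.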
+5.1.1 Learning Curve Extrapolation
+Learning curve extrapolation methods seek to predict the final performance of a given
+architecture after partially training it, by extrapolating from its so-called partial learning
+[Figure 8 (schematic): three panels, showing learning curve extrapolation (accuracy vs.
+epochs, with parametric models such as Weibull, log-log linear, log power, and Janoschek),
+zero-cost proxies, and data subset selection.]
+Figure 8: Illustration of the main types of performance predictors: extrapolating the
+validation accuracy learning curve via a parametric model (left), assessing the
+generalizability of an architecture with a single forward pass of a single minibatch
+of data (middle), and training the architecture on a subset of the data (right).
+curve (the series of validation accuracies at all epochs so far). This can, e.g., be accomplished
+by fitting the partial learning curve to a parametric model (Domhan et al., 2015) (see
+Figure 8 (left)). Learning curve extrapolation methods can also be used together with a
+surrogate model: in that case, the model takes as input both an encoding of a and a partial
+learning curve of a, and outputs a prediction f′(a) (Baker et al., 2018; Klein et al., 2017).
+Learning curve extrapolation methods can be used to speed up black-box NAS algorithms
+(Domhan et al., 2015; Ru et al., 2020a; Yan et al., 2021b) or in conjunction with multi-
+fidelity algorithms such as Hyperband or BOHB (described in Section 5.2).
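+As a concrete sketch, one can fit a saturating parametric model to the partial curve and evaluate it at the final epoch (the model family and the synthetic curve below are illustrative choices, not the ones used in the cited works):
+

```python
import numpy as np

# Fit a "log power"-style saturating model acc(t) = a - b * t**(-c) to the
# first few epochs by a grid search over c with linear least squares for
# (a, b), then extrapolate to the final epoch.
def fit_and_extrapolate(epochs, accs, final_epoch, cs=np.linspace(0.1, 3, 30)):
    best = None
    for c in cs:
        X = np.stack([np.ones_like(epochs, dtype=float), -epochs ** (-c)], axis=1)
        (a, b), res, *_ = np.linalg.lstsq(X, accs, rcond=None)
        sse = np.sum((X @ np.array([a, b]) - accs) ** 2)
        if best is None or sse < best[0]:
            best = (sse, a, b, c)
    _, a, b, c = best
    return a - b * final_epoch ** (-c)

# Synthetic partial learning curve generated from the same model family.
t = np.arange(1, 11)
curve = 0.95 - 0.30 * t ** (-0.8)
pred = fit_and_extrapolate(t, curve, final_epoch=100)
print(round(pred, 3))  # close to 0.95 - 0.30 * 100**(-0.8) ≈ 0.942
```

+Real extrapolation methods fit ensembles of such curve families (or learn the mapping with a surrogate model) and attach uncertainty estimates, but the extrapolation step itself looks like this.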
+5.1.2 Zero-Cost Proxies
+Zero-cost proxies are a recently developed family of performance prediction techniques. The
+idea is to run a very fast computation (such as a single forward and backward pass of a
+single minibatch of data) over a set of architectures that assigns a score to each architecture,
+with the hope that the scores are correlated with the final accuracies (Mellor et al., 2021).
+These techniques get their “zero-cost” name since the overall time to score each architecture
+is negligible (often less than 5 seconds) compared to most other performance prediction
+techniques (Abdelfattah et al., 2021). While most zero-cost proxies compute architecture
+scores from a (single) minibatch of data, some are data-independent, computing the score
+solely from the initialized weights or number of parameters of the neural network.
+Zero-cost proxies were first introduced by Mellor et al. (2021), who estimated the relative
+performance of neural networks based on how well different linear regions of the network
+map are separated (see Figure 8 (middle)). Since the initial technique, several new zero-
+cost proxies have been introduced. Abdelfattah et al. (2021) made a connection to the
+pruning-at-initialization literature (Lee et al., 2019b; Tanaka et al., 2020; Theis et al., 2018;
+Wang et al., 2020a) and used this connection to introduce five zero-cost proxies. Their best-
+performing method, synflow (Tanaka et al., 2020), is a data-independent method which
+computes the L1 path-norm of the network: it computes the sum of the product of all
+initialized weights in each path connecting the input to the output.
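+For a toy fully connected network, this path-norm can be computed with a single forward pass of an all-ones input through the absolute values of the weights (a sketch of the idea, not the original implementation):
+

```python
import numpy as np

# The L1 path-norm — the sum over all input-to-output paths of the
# product of |weight| along the path — needs no data: push an all-ones
# vector through the absolute weight matrices and sum the output.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))  # random init

def synflow_score(weights, in_dim):
    x = np.ones(in_dim)
    for W in weights:
        x = np.abs(W) @ x
    return x.sum()

score = synflow_score([W1, W2], in_dim=3)

# Brute-force check: enumerate every path explicitly.
brute = sum(
    abs(W2[k, j]) * abs(W1[j, i])
    for i in range(3) for j in range(4) for k in range(2)
)
print(np.isclose(score, brute))  # True
```

+The equivalence between the forward pass and the explicit path enumeration is what makes the proxy essentially free to compute even for large networks.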
+Since then, two other data-independent methods have been introduced, based on a series
+of synthetic proxy tasks to test scale invariances and spatial information (Li et al., 2021d),
+and based on approximating the neural network as a piecewise linear function (Lin et al.,
+2021). Other data-dependent methods make use of the neural tangent kernel (NTK) (Jacot
+et al., 2018), based on approximating its trace norm (Shu et al., 2021) or approximating its
+spectrum (Chen et al., 2021e).
+Although zero-cost proxies have received significant attention since they were first in-
+troduced, recent work has shown that simple baselines such as “number of parameters” and
+“FLOPs” are surprisingly competitive with all leading techniques. The main downsides of
+using zero-cost proxies are that they may be unreliable, especially on larger search spaces
+(Chen et al., 2022; Ning et al., 2021; White et al., 2022). They also may have biases, such as
+preferring larger models (Ning et al., 2021) or wide channels (Chen et al., 2022), although
+the biases can be removed (Krishnakumar et al., 2022).
+On the other hand, recent work encourages the viewpoint that zero-cost proxies are
+“weak learners” which can be combined with other techniques, including other zero-cost
+proxies, to improve performance (Krishnakumar et al., 2022; White et al., 2022). Initial
+work shows that zero-cost proxies can be successfully added to both Bayesian optimization-
+based NAS (Shen et al., 2021; White et al., 2021c) and one-shot NAS (Xiang et al., 2021).
+5.1.3 Other Low-Fidelity Predictions
+Besides training for fewer epochs, other works give a low-fidelity estimate of the final accuracy
+by training on a subset of the training data (or a smaller, synthetically generated dataset).
+This is visualized in Figure 8 (right).
+Multiple works have studied different subset selection algorithms, such as random sam-
+pling, entropy-based sampling (Na et al., 2021), clustering via core-sets (Shim et al., 2021),
+facility location (Prasad et al., 2022), and k-center (Na et al., 2021). Prasad et al. (2022)
+introduce adaptive subset selection to NAS, in which the subset is updated throughout
+training in order to maximize validation accuracy.
+Such et al. (2020) introduce generative teaching networks which use a small set of syn-
+thetic data to train neural networks much faster than using the original real training data.
+The synthetic data is created using a data-generating network to match the accuracy of a
+network trained on real data. A related method is synthetic petri dish (Rawal et al., 2020),
+which evaluates architecture motifs by placing them into a small neural network and then
+training them using a small synthetic dataset. This latter method also explicitly optimizes
+the correlation between architecture rankings under the approximation and under full training.
+5.2 Multi-Fidelity Algorithms
+While the previous section was devoted to methods of predicting the performance of neural
+networks, now we cover algorithms that use these methods to run NAS efficiently.
+Formally, the objective function f : X → R, which is typically expensive to fully evaluate,
+can be cheaply approximated by a lower-fidelity version f̂(·, b) of f(·), parameterized
+by the fidelity parameter b. When b = bmax, we retrieve the true function f(·) = f̂(·, bmax).
+This is a generalization of the definition from Section 5.1. The fidelity parameter can denote
+the number of training epochs, training data subset size, and it can make use of perfor-
+mance prediction techniques from the previous section. One can even use multiple fidelity
+parameters at a time (Kandasamy et al., 2017; Zhou et al., 2020). Next, we describe the
+optimization algorithms that exploit access to multi-fidelity function estimates f̂(·, b).
+SuccessiveHalving (SH) (Jamieson and Talwalkar, 2016) is one of the simplest multi-
+fidelity algorithms. It starts training a large number of architectures at low fidelity, successively
+discarding the architectures that are least promising based on lower-fidelity evaluations, until
+only the most promising architectures are evaluated at the highest fidelity. The fidelity
+thresholds and number of architectures to promote to higher fidelities are controlled by a
+hyperparameter. A popular improvement to SH is Hyperband (HB) (Li et al., 2018), a
+multi-armed bandit strategy that repeatedly calls SH as a subroutine, using different values
+of the minimum budget for each call. Therefore, HB hedges its bets against any single
+choice of the minimum budget.
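+A compact sketch of SuccessiveHalving (with a made-up noisy objective standing in for low-fidelity architecture evaluation):
+

```python
import numpy as np

# Start many candidates at a small budget, keep the best half at each
# rung, and double the budget, so total compute stays modest. eval_fn(a, b)
# is a stand-in for evaluating architecture a at fidelity b.
def successive_halving(candidates, eval_fn, min_budget=1, rungs=3):
    budget = min_budget
    for _ in range(rungs):
        scores = [eval_fn(a, budget) for a in candidates]
        order = np.argsort(scores)[::-1]          # higher score = better
        keep = max(1, len(candidates) // 2)
        candidates = [candidates[i] for i in order[:keep]]
        budget *= 2
    return candidates[0]

# Toy objective: candidate quality is revealed more reliably at larger
# budgets (the noise shrinks as the budget grows).
rng = np.random.default_rng(0)
true_quality = {f"arch{i}": i / 10 for i in range(8)}
def eval_fn(a, b):
    return true_quality[a] + rng.normal(scale=0.05 / b)

best = successive_halving(list(true_quality), eval_fn)
print(best)  # very likely "arch7", the highest-quality candidate
```

+Hyperband wraps this routine in an outer loop over different minimum budgets, hedging against low-fidelity evaluations that mis-rank good candidates.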
+While SH and HB are purely based on (smart) random search, recent works have com-
+bined HB with both Bayesian optimization and evolution. Bayesian optimization hyperband
+(BOHB) (Falkner et al., 2018; Lindauer et al., 2022) works similarly to HB in its first iter-
+ation, and on later iterations it fits a probabilistic surrogate model for each fidelity in order
+to make informed sampling decisions. Similarly, DEHB (Mallik and Awad, 2021) combines
+differential evolution (Storn and Price, 1997) with HB, significantly improving the later
+iterations of HB. ASHA (Li et al., 2020c) and ABOHB (Klein et al., 2020) improve SH and
+BOHB further, respectively, by making use of massively parallel asynchronous computation
+and early stopping strategies. Finally, EcoNAS (Zhou et al., 2020) proposes a hierarchi-
+cal evolutionary search method that partitions the search space into subsets and allocates
+increasing fidelities to the most promising architectures in each subset.
+5.3 Meta-Learning
+A majority of NAS approaches consider solving a single task from scratch, ignoring previ-
+ously explored solutions. However, this is in contrast to what both researchers and prac-
+titioners typically do. Often, architectures are transferred across datasets and even across
+tasks, and on a new task, researchers typically start with a state-of-the-art solution. So,
+one might ask: why run NAS from scratch rather than re-using information from, e.g., pre-
+vious experiments? This question naturally leads to the idea of meta-learning or learning
+to learn (Hochreiter et al., 2001; Schmidhuber, 1987; Thrun and Pratt, 1998), which aims
+at improving a learning algorithm by leveraging information from past, related experiments
+(Hospedales et al., 2021; Vanschoren, 2019).
+Wong et al. (2018) and Zimmer et al. (2021) employ meta-learning strategies in a more
+general automated machine learning setting. Since the focus is not on NAS, they both solely
+consider a small set of candidate architectures. In Wong et al. (2018), tasks are encoded in a
+similar fashion as word embeddings in NLP (Mikolov et al., 2013). In contrast, Zimmer et al.
+(2021) simply warm-start their search based on previously well-performing configurations.
+Lian et al. (2020) and Elsken et al. (2020) focus on few-shot learning: the problem of
+learning a new task with just a few data points for training. The authors extend gradient-
+based, model-agnostic meta-learning approaches such as MAML (Finn et al., 2017) and
+REPTILE (Nichol et al., 2018) to meta-learn not only an initial set of weights for a
+fixed neural network architecture, but also the architecture itself, by incorporating a
+differentiable method such as DARTS (Liu et al., 2019c) into the meta-learning algorithm.
+The work by Lee et al. (2021) is neither restricted to few-shot learning nor to choosing
+architectures from a small set of candidates. Rather, they employ typical NAS search spaces
+such as the ones discussed in Section 2. The authors propose a novel set encoder to improve
+upon deep sets (Zaheer et al., 2017) and set transformers (Lee et al., 2019a). A graph neural
+network-based decoder is employed to generate neural architectures given a set encoding.
+Additionally, a graph neural network is employed to encode generated architectures. The
+architecture encoding in combination with the set encoding is then used to meta-learn a
+surrogate model to predict the performance of an (architecture, dataset) tuple. Shala et al.
+(2022) extend the work by Lee et al. (2021) by employing the dataset and architecture
+encodings within a Bayesian optimization framework, resulting in a probabilistic surrogate
+predictor. This further enables adapting the surrogate to datapoints seen at test time.
+5.4 Weight Inheritance and Network Morphisms
+While black-box optimization-based NAS algorithms train each architecture from scratch,
+and one-shot methods train all architectures with the same set of weights, a line of work
+proposes an in-between solution: reuse the weights of trained architectures on similar un-
+trained architectures. This idea is especially helpful for black-box optimization approaches
+that apply only small, sequential changes to architectures when generating a new candidate
+architecture. For example, Real et al. (2017) propose to copy the weights of all layers that
+have not been affected by applied mutations from the parent architecture to its offspring.
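+A sketch of this inheritance step on a toy architecture representation (the layer names, operations, and weight values are invented for illustration):
+

```python
import copy

# The child copies the parent's weights for every layer the mutation did
# not touch; only the mutated layer is re-initialized (0.0 stands in for
# a fresh random initialization here).
def mutate_with_inheritance(parent_arch, parent_weights, layer, new_op):
    child_arch = dict(parent_arch, **{layer: new_op})
    child_weights = {
        name: (0.0 if name == layer else copy.deepcopy(w))
        for name, w in parent_weights.items()
    }
    return child_arch, child_weights

parent_arch = {"layer0": "conv3x3", "layer1": "conv5x5", "layer2": "skip"}
parent_weights = {"layer0": [1.0, 2.0], "layer1": [3.0], "layer2": [4.0]}

child_arch, child_weights = mutate_with_inheritance(
    parent_arch, parent_weights, layer="layer1", new_op="conv3x3")
print(child_weights)  # layer0/layer2 inherited, layer1 re-initialized
```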
+This idea has also been extended by the concept of network morphisms (Chen et al.,
+2016; Wei et al., 2016). Network morphisms are operators acting on the space of neural
+network architectures. They change the architecture of a neural network without changing
+the function it represents, i.e., for any given input, the output of the original architecture
+and that of the architecture modified by a network morphism remain identical.
+This is typically achieved by properly initializing the modified architecture. Network mor-
+phisms have been employed in evolutionary algorithms (Elsken et al., 2017, 2019a; Schorn
+et al., 2020; Wistuba, 2019), reinforcement learning (Cai et al., 2018a,b), Bayesian opti-
+mization (Jin et al., 2019b), and even one-shot methods (Fang et al., 2020).
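+The defining property, changing the architecture without changing the function, can be illustrated with a depth-increasing morphism that inserts an identity-initialized layer (a toy sketch for purely linear layers; handling nonlinear activations requires more care):
+

```python
import numpy as np

# Insert a new linear layer initialized to the identity, so the deepened
# network computes exactly the same function as the original one.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))

def forward(x, layers):
    for W in layers:
        x = W @ x
    return x

x = rng.normal(size=3)
before = forward(x, [W1])
after = forward(x, [W1, np.eye(4)])   # morphed: extra identity layer

print(np.allclose(before, after))  # True: function unchanged
```

+Because the inserted layer starts as the identity, training can continue from weights that already realize the parent network's function instead of from scratch.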
+6. Extensions
+The previous sections studied the main techniques from the classic instantiation of NAS. In
+this section, we survey a few common extensions: joint NAS + HPO, constrained/multi-
+objective NAS, and neural ensemble search.
+6.1 Joint NAS + HPO
+While a large body of the NAS literature assumes fixed hyperparameters in their experimen-
+tal setup, it has been shown – perhaps not very surprisingly – that hyperparameters also
+play a significant role. For example, on the DARTS search space, tuning hyperparameters
+can lead to a huge improvement, exceeding the performance gains obtained by NAS (Yang
+et al., 2020). However, the best hyperparameters may vary significantly across architectures
+even in the same search space (Yang et al., 2020). Therefore, a recent body of work seeks to
+overcome these challenges and give efficient algorithms for NAS + HPO (Dai et al., 2021;
+Dong et al., 2020; Izquierdo et al., 2021; Zela et al., 2018; Zhou et al., 2021).
+Running joint NAS + HPO is significantly more challenging than running NAS or HPO
+in isolation. First, the complexity of the search space is substantially increased, due to the
+increased number of hyperparameters and the heterogeneity of the hyperparameters. Sec-
+ond, the interaction between architectures and training hyperparameters in terms of network
+performance is difficult to model. Furthermore, some hyperparameters can have different
+effects on the performance under different evaluation budgets, reducing the effectiveness of
+many multi-fidelity and performance prediction techniques.
+In light of these challenges, several solutions have been proposed. Various methods have
+been introduced to homogenize the search space, such as reformulating NAS as an HPO
+problem with categorical hyperparameters (Zela et al., 2018), or standardizing the repre-
+sentation of the NAS and HPO hyperparameters by assigning continuous-valued coefficients
+in [0, 1] (Dong et al., 2020). The search strategies resemble standard NAS algorithms such
+as BO (Dai et al., 2021; Izquierdo et al., 2021; Zela et al., 2018), evolution (Dai et al., 2021;
+Izquierdo et al., 2021), or REINFORCE with weight sharing (Dong et al., 2020).
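A toy sketch of such a homogenized representation (the option names and values below are invented, loosely in the spirit of Dong et al., 2020): every architectural choice and every training hyperparameter becomes a categorical decision whose options carry continuous coefficients in [0, 1], from which joint configurations can be sampled.

```python
import random

# Hypothetical joint NAS + HPO space: architectural choices and training
# hyperparameters are both treated as categorical decisions.
space = {
    "op":            ["conv3x3", "conv5x5", "skip"],
    "learning_rate": [0.1, 0.01, 0.001],
    "weight_decay":  [0.0, 1e-4, 1e-3],
}

def init_coefficients(space):
    # Start from a uniform distribution over each decision's options;
    # a search strategy would later update these coefficients.
    return {k: [1.0 / len(v)] * len(v) for k, v in space.items()}

def sample_configuration(space, coeffs, rng):
    # Draw one option per decision according to its coefficients.
    return {k: rng.choices(space[k], weights=coeffs[k], k=1)[0]
            for k in space}

rng = random.Random(0)
coeffs = init_coefficients(space)
config = sample_configuration(space, coeffs, rng)
print(config)  # one joint architecture + hyperparameter configuration
```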
+6.2 Constrained and Multi-Objective NAS
+Although NAS has been very popular in recent years, most work focuses on solely optimizing
+for a single objective, typically the accuracy or error rate. However, there are many settings
+for which this is not sufficient, such as when the neural network must be deployed on an
+edge device or must satisfy a legal definition of fairness.
+In such applications, we may
+need to constrain the latency, memory usage, or rate of errors across classes (Sukthanker
+et al., 2022). There has been particular interest in constraints related to edge devices and
+other hardware, termed hardware-aware NAS (Benmeziane et al., 2021). To achieve one or
+more objectives in addition to accuracy, the standard NAS objective is typically modified
+to either a constrained optimization problem (e.g., Bender et al. (2020); Cai et al. (2019);
+Tan et al. (2019)) or a multi-objective optimization problem (e.g., Elsken et al. (2019a); Hu
+et al. (2019); Izquierdo et al. (2021); Lu et al. (2019, 2020)).
+In constrained optimization, one tries to solve the following equation:
+min_{a ∈ A} f(a)  subject to  h_i(a) ≤ c_i  for i ∈ {1, . . . , k}    (2)
+where f(a) denotes, as before, the original objective function (e.g., validation error), and
+the h_i represent hardware constraints as a function of the architecture. This problem is often
+solved by transforming it into an additive or multiplicative unconstrained problem such as
+min_{a ∈ A} f(a) + Σ_i λ_i g_i(a), with penalty functions g_i penalizing architectures that violate
+the constraints, e.g., g_i(a) = max(0, h_i(a) − c_i), and hyperparameters λ_i trading off the
+objectives and constraints. This single-objective optimization problem is then solved using
+black-box optimization methods or one-shot methods. In the latter case, the penalty functions
+g_i need to be differentiable, which is often not the case. Therefore, discrete metrics such
+as latency are relaxed to continuous variables through various techniques, such as with a
+Gumbel softmax function (Wu et al., 2019b).
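The additive penalty transform can be sketched directly; the objective and latency functions below are invented toys, not real measurements.

```python
def penalized_objective(f, hs, cs, lambdas, a):
    """Additive penalty transform of Equation (2) (a sketch): the original
    objective f(a) plus weighted hinge penalties for violated constraints."""
    penalty = sum(lam * max(0.0, h(a) - c)
                  for h, c, lam in zip(hs, cs, lambdas))
    return f(a) + penalty

# Toy example with hypothetical cost functions: f plays the role of
# validation error, `latency` is a single hardware constraint h_1.
f = lambda a: 1.0 / (1 + a)       # "error" decreases with model "size" a
latency = lambda a: 2.0 * a       # "latency" grows with model size

score_small = penalized_objective(f, [latency], [5.0], [10.0], a=2)  # feasible
score_large = penalized_objective(f, [latency], [5.0], [10.0], a=4)  # violates
assert score_large > score_small  # the penalty dominates once violated
```

With λ chosen large enough, infeasible architectures score worse than any feasible one, so an unconstrained optimizer is steered toward the feasible region.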
+White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter
+In multi-objective optimization, the requirements in Equation 2 are treated as separate
+objectives that are optimized along with the original objective:
+min_{a ∈ A} ( f(a), h_1(a), . . . , h_k(a) ).
+While this can again be reduced to a single-objective problem via scalarization methods,
+another common approach is to search for a set of non-dominated solutions that are op-
+timal in the sense that one cannot reduce any objective without increasing at least one
+other objective. The set of non-dominated solutions is called the Pareto front. The most
+common approach in this case is to employ multi-objective evolutionary algorithms which
+maintain a population of architectures and aim to improve the Pareto front obtained from
+the current population by evolving the current population (Elsken et al., 2019a; Hu et al.,
+2019; Izquierdo et al., 2021; Lu et al., 2019). Multi-objective evolutionary algorithms have
+also been used in combination with weight sharing within one-shot models (Lu et al., 2020;
+Muñoz et al., 2022).
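Extracting the Pareto front from a set of already-evaluated candidates is straightforward; a minimal sketch with both objectives minimized (the error/latency values below are invented):

```python
def dominates(q, p):
    # q dominates p if it is no worse in every objective
    # and strictly better in at least one.
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    # Non-dominated points: no other candidate dominates them.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (error, latency) pairs for five hypothetical architectures.
archs = [(0.10, 9.0), (0.12, 5.0), (0.15, 5.0), (0.20, 2.0), (0.11, 10.0)]
print(pareto_front(archs))  # → [(0.1, 9.0), (0.12, 5.0), (0.2, 2.0)]
```

Moving along this front, no objective can be reduced without increasing another, which is exactly the optimality notion used by the multi-objective evolutionary algorithms above.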
+One of the most widely studied constrained NAS problems concerns hardware efficiency
+metrics such as memory and latency, and many works have been devoted to efficiently
+approximating the hardware metrics of interest. While simple metrics such as the number
+of parameters
+are easily computed, these are often not correlated enough with other metrics of interest
+such as memory or latency. Other solutions include computing hardware costs modularly
+as the sum of the hardware cost of each operation (Cai et al., 2019) or by using a surrogate
+model that predicts hardware costs (Dudziak et al., 2020; Laube et al., 2022).
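A sketch of the modular, sum-of-operations cost model: per-operation latencies are measured once per target device and summed over an architecture's operations. The table entries below are invented placeholders, not real measurements.

```python
# Hypothetical per-operation latencies (milliseconds) for one target device.
op_latency_ms = {"conv3x3": 1.8, "conv5x5": 4.1, "skip": 0.05, "maxpool": 0.6}

def estimated_latency(architecture):
    # Modular cost model: total latency ≈ sum of per-operation costs.
    return sum(op_latency_ms[op] for op in architecture)

arch = ["conv3x3", "conv3x3", "maxpool", "conv5x5", "skip"]
print(round(estimated_latency(arch), 2))  # → 8.35
```

The lookup is essentially free at search time, which is why this style of estimate is popular, even though it ignores interactions between operations that a learned surrogate can capture.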
+6.3 Neural Ensemble Search
+While the goal of neural architecture search is to return the best standalone architecture,
+ensembling methods are popular within the deep learning community for their robust pre-
+dictions and their easy uncertainty quantification. A newly emerging extension of NAS
+is concerned with finding the best ensemble of neural networks with diverse architectures,
+which can outperform standard NAS in terms of accuracy, uncertainty calibration, and
+robustness to dataset shift (Zaidi et al., 2021). Neural ensemble search is defined as follows:
+min_{a_1, . . . , a_M ∈ A} L_val( Ensemble((w∗(a_1), a_1), . . . , (w∗(a_M), a_M)) )    (3)
+s.t.  w∗(a) = argmin_w L_train(w, a)  for all a ∈ A,
+where Ensemble is the function which aggregates the outputs of the M member networks.
+Note that the search space cardinality is |A|^M rather than |A| as in standard NAS.
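A toy sketch of Equation 3: here the Ensemble function is assumed to average member class probabilities, and a black-box search scores every size-M subset of a tiny made-up candidate pool, hinting at why the enlarged search space makes black-box NES expensive.

```python
import itertools
import numpy as np

def ensemble_predict(probs_per_model):
    # Assumed Ensemble: average the members' predicted class probabilities.
    return np.mean(probs_per_model, axis=0)

def ensemble_loss(probs_per_model, labels):
    # Negative log-likelihood of the true labels under the ensemble.
    p = ensemble_predict(probs_per_model)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

# Hypothetical validation predictions of three trained architectures
# (4 examples, 2 classes each), plus the true labels.
rng = np.random.default_rng(1)
preds = {a: rng.dirichlet(np.ones(2), size=4) for a in ["a1", "a2", "a3"]}
labels = np.array([0, 1, 0, 1])

# Exhaustive black-box search over all size-2 subsets of the pool.
best = min(itertools.combinations(preds, 2),
           key=lambda combo: ensemble_loss([preds[a] for a in combo], labels))
print(best)
```

Even this unordered variant has |A| choose M candidates, and the ordered formulation of Equation 3 grows as |A|^M, so exhaustive evaluation is hopeless for realistic search spaces.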
+Zaidi et al. (2021) propose two simple yet effective procedures based on random search
+and regularized evolution (Real et al., 2019) that search for architectures that optimize
+Equation 3. Despite their effectiveness, these algorithms require considerable computation
+due to the black-box nature of the optimization algorithms. Multi-headed NES (Narayanan
+et al., 2021) circumvents this issue by applying differentiable NAS methods on the heads
+of a multi-headed network. The heads are explicitly tuned to optimize the ensemble loss
+together with a diversity component that encourages uncorrelated predictions coming from
+the individual heads.
+Other works have set up neural ensemble search with a one-shot
+model for the entire architecture. NESBS (Neural Ensemble Search via Bayesian Sampling)
+(Shu et al., 2022) proposes to use a supernet to estimate the ensemble performance of inde-
+pendently trained base learners and then use Bayesian sampling to find a high-performing
+ensemble.
+NADS (Neural Architecture Distribution Search) (Ardywibowo et al., 2020)
+follows a similar line by training a supernet to optimize an objective that is tailored to
+provide better uncertainty estimates and out-of-distribution detection. Chen et al. (2021b)
+run evolutionary search on the supernet to find a high-performing ensemble.
+7. Applications
+Along with discovering improved architectures for well-known datasets, one of the primary
+goals of the field of NAS is to quickly and automatically find high-performing architectures
+for brand new datasets and tasks. Although the majority of the NAS literature focuses
+on image classification, there are numerous success stories for NAS applied to less well-
+known settings. In this section, we discuss a few of these successes, including graph neural
+networks, generative adversarial networks, dense prediction, and transformers.
+7.1 Graph Neural Networks
+Graph neural networks (GNNs) are designed to process data represented by graphs. Using
+NAS to design GNNs poses unique problems: the search space for GNNs is more complex
+than typical convolutional search spaces, and both NAS and GNNs are independently known
+for their large computational overhead.
+Zhou et al. (2019) initiated a line of work applying NAS to GNNs by defining a new
+search space with GNN-specific operations and then using a reinforcement learning strategy.
+Follow-up work designed similar search spaces (Gao et al., 2020b; Zhang et al., 2021b)
+with specialized features such as meta-paths (Ding et al., 2021b), edge features (Jiang and
+Balaprakash, 2020), or fast sampling operations (Gao et al., 2020b).
+Overall, the main difference between NAS for GNNs and more standard NAS settings
+lies in the construction of the search space. The main search strategies used by GNN NAS
+algorithms are typical NAS approaches: reinforcement learning (Gao et al., 2020b; Zhao
+et al., 2020a; Zhou et al., 2019), one-shot methods (Ding et al., 2021b; Zhao et al., 2020b),
+and evolutionary algorithms (Jiang and Balaprakash, 2020; Nunes and Pappa, 2020). For
+a detailed survey on NAS for GNNs, see Zhang et al. (2021b).
+7.2 Generative Adversarial Networks
+Generative adversarial networks (GANs) (Goodfellow et al., 2014) are a popular choice for
+generative modeling in tasks such as computer vision. GANs make use of two separate
+networks training in tandem: a generator and a discriminator. Due to having two separate
+networks, and their notoriously brittle training dynamics (Gulrajani et al., 2017), GANs
+require special techniques for effective NAS.
+Different works have achieved improved performance via NAS by searching for only
+the generator architecture with a fixed discriminator (Doveh and Giryes, 2021), with a
+predefined progressively growing discriminator (Fu et al., 2020), or by searching both the
+generator and discriminator architectures simultaneously (Gong et al., 2019). The most
+popular choice of search space is the cell-based search space. The cell for the generator
+consists of a standard convolutional cell, with the addition of various upsampling operations
+(Ganepola and Wirasingha, 2021; Gong et al., 2019; Tian et al., 2020).
+The search techniques resemble the techniques used for standard NAS: reinforcement
+learning (Fu et al., 2020; Tian et al., 2020; Wang and Huan, 2019), one-shot NAS (Doveh and
+Giryes, 2021; Gao et al., 2020a; Lutz et al., 2018), and evolutionary algorithms (Kobayashi
+and Nagao, 2020), with scoring based on either Inception Score (IS) (Salimans et al., 2016)
+or Fréchet Inception Distance (FID) (Heusel et al., 2017). For a comprehensive survey on
+NAS for GANs, see Ganepola and Wirasingha (2021).
+7.3 Dense Prediction Tasks
+Dense prediction for computer vision encompasses a variety of popular tasks such as seman-
+tic segmentation, object detection, optical flow, and disparity estimation, and it requires
+more complex architectures compared to standard image classification problems. For ex-
+ample, the architectures often include a decoder (Ronneberger et al., 2015), modules for
+generating multi-scale features (He et al., 2015) or task-specific heads (Girshick et al., 2014)
+in addition to the main network. Thus, NAS algorithms have been applied to search for
+these components, either in isolation (Chen et al., 2018; Ghiasi et al., 2019; Xu et al., 2019a)
+or jointly (Guo et al., 2020a; Yao et al., 2020), or by discovering novel design patterns (Du
+et al., 2020). For a survey on NAS for dense prediction, see Elsken et al. (2022).
+Once again, standard NAS techniques are used: Guo et al. (2020a); Liu et al. (2019a);
+Saikia et al. (2019); Xu et al. (2019a) employ gradient-based search via DARTS (Liu et al.,
+2019c); Du et al. (2020); Ghiasi et al. (2019) use RL; Bender et al. (2020) is inspired by
+ProxylessNAS (Cai et al., 2019) and ENAS (Pham et al., 2018).
+Methods for dense prediction tasks (e.g., Bender et al. (2020); Chen et al. (2019b);
+Guo et al. (2020a); Shaw et al. (2019); Wu et al. (2019a)) typically build search spaces
+based on state-of-the-art image classification networks, augmented with task-specific
+components drawn from well-performing dense prediction architectures.
+As many approaches fix the
+backbone and only search for other task-specific components of the architecture, they often
+employ pre-trained backbone architectures (Chen et al., 2020; Guo et al., 2020a) or even
+cache the features generated by a backbone (Chen et al., 2018; Nekrasov et al., 2019;
+Wang et al., 2020c) to speed up architecture search.
+Chen et al. (2018); Ghiasi et al.
+(2019) also use a down-scaled or different backbone architecture during the search process.
+Methods also sometimes employ multiple search stages, with the goal of first eliminating
+poorly performing architectures (or parts of the search space) and successively improving
+the remaining architectures (Du et al., 2020; Guo et al., 2020a).
+Overall, while it is much harder to run NAS on dense prediction tasks compared to
+image classification tasks because of the computational demands of dense prediction, there
+has been a rapid increase in developments with the rise of computationally efficient one-shot
+NAS methods. While efforts thus far have focused on semantic segmentation and object
+detection, avenues for future work include disparity estimation, panoptic segmentation, 3D
+detection and segmentation, and optical flow estimation.
+7.4 Transformers
+Transformers were proposed by Vaswani et al. (2017) to address the difficulty RNNs have
+in modeling long sequences, using self-attention and cross-attention mechanisms such that
+each token’s representation in an input sequence is computed from a weighted average of
+the representations of all other tokens. The core transformer design was
+introduced for machine translation, but it has found widespread usage in causal language
+modeling (Brown et al., 2020; Radford et al., 2019), masked language modeling (Clark et al.,
+2020; Devlin et al., 2019; Liu et al., 2019d), and more recently, computer vision (Dosovitskiy
+et al., 2021; Liu et al., 2021b). Since its release, there have been many efforts to improve
+transformers via NAS. The most common search strategies for transformers are evolutionary
+(Chen et al., 2021c; So et al., 2019, 2021) or one-shot (Ding et al., 2021a; Gong et al., 2021;
+Li et al., 2021a; Su et al., 2021). On the other hand, there is a huge variety of different search
+spaces that have been tried recently, relative to other areas (e.g., in NAS for convolutional
+architectures, the majority of works use cell-based search spaces). Overall, the field of NAS
+for transformers has not converged to one “best” type of search space. Below, we survey
+NAS methods for four types of transformers: decoder-only, encoder-only, encoder-decoder,
+and vision transformers. See Chitty-Venkata et al. (2022) for an in-depth survey.
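The weighted-average mechanism described above can be sketched as minimal single-head self-attention (a generic illustration with the learned query/key/value projections omitted for brevity):

```python
import numpy as np

def self_attention(X):
    """Minimal single-head self-attention (a sketch): each token's new
    representation is a weighted average of all token representations,
    with weights from softmax-normalized scaled dot-product similarity."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over tokens
    return weights @ X                               # weighted average

X = np.random.default_rng(0).normal(size=(5, 8))     # 5 tokens, dim 8
out = self_attention(X)
assert out.shape == X.shape
```

Each output row is a convex combination of the input token vectors, which is the "weighted average of representations" that NAS methods for transformers leave intact while searching over block structure, widths, and depths.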
+Decoder-only architectures, such as the GPT line of architectures (Brown et al., 2020;
+Radford et al., 2019) directly consume the input text prompt and output the sequence of
+text tokens that are most likely to follow. Primer (So et al., 2021) is a NAS algorithm
+that makes use of evolutionary search on a large macro decoder-only search space. The
+approach found two consistent improvements to the transformer block: squaring the ReLU
+in the feedforward block in the transformer layer, and adding depthwise convolutions after
+self-attention heads.
+Encoder-only architectures, such as BERT (Devlin et al., 2019) encode the input text
+into a representation which can be used for many kinds of downstream tasks. Multiple works
+(Xu et al., 2021a, 2022; Yin et al., 2021) seek to discover compressed versions of BERT, in
+which the desired latency and task are specified by the user. The typical approach is to
+train a supernet on a standard self-supervised task (masked language modeling), which can
+then be used to discover compressed models for a given language task.
+Encoder-decoder architectures such as T5 (Raffel et al., 2020) are used in sequence-
+to-sequence tasks such as machine translation, in which the source language is encoded
+into a representation, which is then decoded into the target language. So et al. (2019) use
+evolutionary search together with a new technique to dynamically allocate more resources
+to more promising candidate models, while Zhao et al. (2021b) propose a DARTS-based
+algorithm with a new technique for memory efficiency in backpropagation. Finally, KNAS
+(Xu et al., 2021b) and SemiNAS (Luo et al., 2020) speed up the search using zero-cost
+proxies and a surrogate transformer model, respectively.
+A large variety of NAS algorithms have been studied for vision transformer search spaces,
+with the majority using one-shot methods. AutoFormer (Chen et al., 2021c) searches over
+vision transformer architectures and hyperparameters using a single-path-one-shot strategy
+(Guo et al., 2020b) and then runs evolutionary search on the trained supernet. A follow-up
+work, AutoFormerv2 (Chen et al., 2021d), automated the design of the search
+space itself by gradually evolving different search dimensions. Other works have improved
+supernet training via gradient conflict aware training (Gong et al., 2021) or channel-aware
+training (Su et al., 2021). Finally, Li et al. (2021a) and Ding et al. (2021a) run one-shot
+methods on hybrid CNN and transformer search spaces for computer vision.
+8. Benchmarks
+In the early days of NAS research, the most popular metrics were the final test accuracies
+on CIFAR-10 and ImageNet. This caused inconsistent search spaces and training pipelines
+across papers, and also drove up computational costs. For example, it became standard
+to train the final architecture for 600 epochs, even though the test accuracy only increases
+by a fraction of a percent past 200 epochs. Recently, queryable NAS benchmarks have
+helped the field reduce computation when developing NAS techniques and to achieve fair,
+statistically significant comparisons between methods.
+A NAS benchmark (Lindauer and Hutter, 2020) is defined as a dataset with a fixed
+train-test split, a search space, and a fixed evaluation pipeline for training the architectures.
+A tabular NAS benchmark is one that additionally gives precomputed evaluations for all
+possible architectures in the search space. A surrogate NAS benchmark is a NAS benchmark
+along with a surrogate model that can be used to predict the performance of any architecture
+in the search space. A NAS benchmark is queryable if it is either a tabular or a surrogate
+benchmark. Queryable NAS benchmarks can be used to efficiently simulate many NAS
+experiments using only a CPU, by querying the performance of neural networks from the
+benchmark, rather than training them from scratch. In the rest of the section, we give an
+overview of popular NAS benchmarks. See Appendix Table 2 for a summary.
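Simulating a NAS run against a queryable benchmark reduces to table lookups; the accuracy table below is a made-up stand-in for a real benchmark such as NAS-Bench-201, and the search strategy is plain random search.

```python
import random

# Hypothetical precomputed benchmark: architecture id -> validation accuracy.
benchmark = {arch_id: 85.0 + 10.0 * random.Random(arch_id).random()
             for arch_id in range(100)}

def simulated_random_search(n_queries, seed=0):
    # Each "evaluation" is a dictionary lookup instead of a training run,
    # so thousands of simulated NAS trials run in seconds on a CPU.
    rng = random.Random(seed)
    sampled = rng.sample(sorted(benchmark), n_queries)
    return max(sampled, key=benchmark.__getitem__)

best = simulated_random_search(n_queries=20)
print(best, round(benchmark[best], 2))
```

Swapping in a different search strategy only changes how the next architecture id is chosen; the expensive training step never happens.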
+The first tabular NAS benchmark was NAS-Bench-101 (Ying et al., 2019). It consists
+of a cell-based search space of 423 624 architectures, each with precomputed validation
+and test accuracies on CIFAR-10 for three different seeds. A follow-up work, NAS-Bench-
+1Shot1 (Zela et al., 2020b), is able to simulate one-shot algorithms by defining subsets of the
+NAS-Bench-101 search space which have a fixed number of nodes. NAS-Bench-201 (Dong
+and Yang, 2020) is another popular tabular NAS benchmark, consisting of 6466 unique
+architectures, each with precomputed validation and test accuracies on CIFAR-10, CIFAR-
+100, and ImageNet-16-120 for three seeds each. NATS-Bench (Dong et al., 2021b) is an
+extension of NAS-Bench-201 which also includes a macro search space. Another extension,
+HW-NAS-Bench-201 (Li et al., 2021b), gives the measured or estimated hardware cost for
+all architectures across six hardware devices.
+Surr-NAS-Bench-DARTS (formerly called NAS-Bench-301) (Siems et al., 2020) was the
+first surrogate NAS benchmark, created by training 60 000 architectures from the DARTS
+(Liu et al., 2019c) search space on CIFAR-10 and then training a surrogate model. The
+authors also released Surr-NAS-Bench-FBNet for the FBNet search space (Wu et al., 2019b).
+A follow-up work, NAS-Bench-x11 (Yan et al., 2021b), devised a technique to predict the
+full learning curve, allowing the validation accuracies to be queried at arbitrary epochs,
+which is necessary for simulating multi-fidelity NAS algorithms.
+TransNAS-Bench-101 (Duan et al., 2021) is a tabular benchmark that covers seven dif-
+ferent computer vision tasks from the Taskonomy dataset (Zamir et al., 2018). Beyond
+computer vision, NAS-Bench-NLP (Klyuchnikov et al., 2022) consists of an LSTM-inspired
+search space for NLP, and NAS-Bench-ASR (Mehrotra et al., 2021) is a tabular NAS bench-
+mark for automatic speech recognition (Garofolo, 1993). NAS-Bench-360 (Tu et al., 2022a)
+is a benchmark suite which gives NAS benchmarks on ten diverse problems such as pros-
+thetics control, PDE solving, protein folding, and astronomy imaging, and is search space
+agnostic, although three of the tasks have pretrained architectures on the NAS-Bench-201
+search space. Finally, NAS-Bench-Suite (Mehta et al., 2022) is a benchmark suite which
+combines the majority of existing queryable NAS benchmarks (28 tasks in total) into a single
+unified interface. An extension, NAS-Bench-Suite-Zero, offers precomputed zero-cost proxy
+values across all tasks (Krishnakumar et al., 2022).
+Using queryable benchmarks allows researchers to easily simulate hundreds of trials of
+the algorithms with different initial random seeds, making it easy to report statistically
+significant comparisons. However, over-reliance on a few benchmarks can lead to the field
+over-fitting (Koch et al., 2021; Raji et al., 2021) and is not conducive to the discovery of truly
+novel methods. Therefore, researchers should use a large set of diverse NAS benchmarks
+whenever possible.
+9. Best Practices
+The field of NAS has at times seen problems with reproducibility and fair, statistically
+significant comparisons among methods. These issues impede the overall research progress
+in the field of NAS. Recently, a few papers have laid out best practices and guidelines for
+conducting sound NAS research that is reproducible and makes fair comparisons (Li and
+Talwalkar, 2019; Lindauer and Hutter, 2020; Yang et al., 2020). These best practices are
+also available as a checklist (Lindauer and Hutter, 2020). We encourage NAS researchers
+to follow the checklist and to attach it to the appendix of their papers. Now, we summarize
+these best practices for NAS research.
+9.1 Releasing Code and Important Details
+It is nearly impossible to reproduce NAS methods without the full code. Even then, random
+seeds should be specified and reported. Furthermore, releasing easy-to-use code can lead to
+more follow-up methods and impact. For example, Liu et al. (2019c) released easy-to-use
+code for DARTS, which facilitated numerous follow-up works.
+When releasing code, it is important to release all components, including the training
+pipeline(s), search space, hyperparameters, random seeds, and the NAS method. Many
+papers use different architecture training pipelines during the search and during the final
+evaluation, so it is important to include both. Note that using popular NAS benchmarks
+such as NAS-Bench-101 or NAS-Bench-201 (see Section 8) makes this substantially easier:
+the training pipeline is already fixed.
+NAS methods often have several moving parts. As a result, they typically have many
+hyperparameters of their own that could be tuned. In fact, many NAS methods themselves
+make use of neural networks – one could even run a NAS algorithm on the NAS algorithm!
+Due to this complexity, it is important to report if, or how, these hyperparameters were
+tuned. When reporting results on a large set of search spaces and datasets, the best practice
+is to tune the hyperparameters of the NAS method on one dataset, and then fix these
+hyperparameters for the remaining evaluations on other datasets. We also note that, in
+general, devising NAS methods with fewer hyperparameters is more desirable, especially
+because it has recently been shown that hyperparameters often do not transfer well across
+datasets and search spaces (Mehta et al., 2022).
+9.2 Comparing NAS Methods
+When comparing NAS methods, it is not enough to use the same datasets. The exact same
+NAS benchmarks must be used: a dataset with a fixed train-test split, search space, and
+evaluation pipeline. Otherwise, it is unclear whether a difference in performance is due to
+the NAS algorithm or the training pipeline.
+Several papers have shown that simple baselines are competitive with state-of-the-art
+NAS algorithms (Li and Talwalkar, 2019; Ottelander et al., 2021; Sciuto et al., 2020; White
+et al., 2021b). When designing a new method for NAS, it is important to compare the
+method with baselines such as random sampling and random search. Furthermore, many
+NAS methods are anytime algorithms: a time budget does not necessarily need to be spec-
+ified upfront, and the method can be stopped at any time, returning the best architecture
+found so far. The longer the NAS method runs, the better the final result. These NAS
+methods should be compared on a plot of performance over time. Even one-shot algorithms
+can be compared in this way, since the supernet can be discretized and trained at any point.
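Reporting performance over time only requires logging the incumbent (best-so-far) value after each evaluation; a sketch with a made-up error function and search space:

```python
import random

def random_search_trajectory(evaluate, sample, n_iters, seed=0):
    """Record the incumbent (best-so-far) value after each evaluation, so
    an anytime NAS method can be reported as performance over time."""
    rng = random.Random(seed)
    trajectory, best = [], float("inf")
    for _ in range(n_iters):
        err = evaluate(sample(rng))
        best = min(best, err)
        trajectory.append(best)
    return trajectory

# Toy search space: architectures are integers, "validation error" is invented.
traj = random_search_trajectory(
    evaluate=lambda a: (a - 37) ** 2 % 101,   # hypothetical error function
    sample=lambda rng: rng.randrange(100),
    n_iters=50,
)
assert all(x >= y for x, y in zip(traj, traj[1:]))  # incumbent never worsens
```

Plotting such trajectories for several methods (with multiple seeds each) gives exactly the performance-over-time comparison recommended above.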
+We recommend that NAS researchers run thorough ablation studies to show which
+part(s) of the NAS method lead to the most improved performance. As mentioned in the
+previous section, NAS methods often have several moving parts, so a clear understanding
+of the importance of each part, and of how the parts work together, is important to report.
+Finally, we
+recommend that researchers run multiple trials of their experiments and report the random
+seeds for each experiment. NAS methods can exhibit high variance across random seeds,
+so running many trials is important to establish statistically significant comparisons.
+10. Resources
+In this section, we discuss NAS resources including libraries (Section 10.1), other survey
+papers (Section 10.2), and additional resources (Section 10.3).
+10.1 Libraries
+A long line of engineering has been focused on automating machine learning pipelines: Auto-
+WEKA (Thornton et al., 2013), Auto-Sklearn (Feurer et al., 2015), TPOT (Olson et al.,
+2016), and AutoGluon-Tabular (Erickson et al., 2020). More recently, a special focus has
+been given to developing tools that can facilitate the deployment of various NAS algorithms
+for practitioners, such as Auto-Keras (Jin et al., 2019a), Auto-PyTorch Tabular (Zimmer
+et al., 2021), AutoGluon (Erickson et al., 2020), and NNI (Microsoft, 2021).
+To provide a toolbox for facilitating NAS research, in both developing new NAS meth-
+ods and applying NAS to new problem domains, various libraries have been proposed. The
+DeepArchitect library (Negrinho and Gordon, 2017), which separates the search space from
+the optimizer, was an important first step towards this direction in the NAS community.
+NASLib (Ruchte et al., 2020) unifies and simplifies NAS research by having a single ab-
+straction for one-shot and BBO algorithms, and a single abstraction for the search spaces
+of nearly all queryable NAS benchmarks. Archai (Hu et al., 2019) also provides unified
+abstractions for one-shot and discrete NAS algorithms. The aim for Archai is both to support
+reproducible rapid prototyping for NAS research as well as to be a turnkey solution for
+data scientists looking to try NAS on their tasks. PyGlove (Peng et al., 2020) introduced a
+novel approach to constructing NAS methods via symbolic programming, in which the ML
+programs are mutable and can be manipulated and processed by other programs.
+10.2 Other NAS Survey Papers
+There are several older NAS survey papers.
+Elsken et al. (2019b) provides a compact
+introduction to NAS and introduces the “three pillars” of NAS: search space, search strategy,
+and performance evaluation strategy. The survey by Wistuba et al. (2019) provides a more
+comprehensive view of the landscape of NAS research, unifying and categorizing existing
+methods. Ren et al. (2020) organize their survey around the historical challenges in the
+field of NAS, as well as the solutions found to remedy these challenges.
+Other surveys have been released which focus on a specific sub-area of NAS. Liu et al.
+(2021a) focus on evolutionary NAS, Benmeziane et al. (2021) focus on hardware-aware NAS
+(HW-NAS), Zhang et al. (2021b) survey AutoML (with a NAS focus) on graphs, Elsken
+et al. (2022) survey NAS for dense prediction in computer vision, and Xie et al. (2021),
+Santra et al. (2021), and Cha et al. (2022) all survey one-shot NAS methods.
+Finally, there are more survey papers with a broader focus such as automated machine
+learning (AutoML) or automated deep learning (AutoDL), which devote a section to NAS
+(Dong et al., 2021a; He et al., 2021; Kedziora et al., 2020; Yao et al., 2018; Yu and Zhu,
+2020). Notably, the first book on automated machine learning (which is open-access) was
+released in May 2019 by Hutter et al. (2019).
+10.3 Additional Resources
+There are multiple long-running workshops which focus on NAS and related topics. The
+AutoML workshop at ICML (2014-2021) and Meta-Learning workshop at NeurIPS (2017-
+2022) have had a healthy overlap in attendance with the NAS community, especially over
+the last few years, while ICLR (2020, 2021) and CVPR (2021) have had workshops devoted
+solely to NAS. Finally, after many years of AutoML and NAS workshops, the community
+has grown large enough to start the first AutoML conference: https://automl.cc/.
+For a continuously updated, searchable list of NAS papers, see
+https://www.automl.org/automl/literature-on-neural-architecture-search/. For a
+continuously updated list of NAS papers published at ML venues, as well as other resources,
+see https://github.com/D-X-Y/Awesome-AutoDL.
+11. Future Directions
+Neural architecture search has come a long way in the last few years. The efficiency of NAS
+algorithms has improved by orders of magnitude, tools exist to compare NAS algorithms
+without GPUs, and researchers have created many novel techniques and diverse search
+spaces. Architectures discovered by NAS constitute the state of the art on many tasks.
+However, there are still many unsolved problems and promising future directions. In this
+section, we discuss a few of the most important directions for future work in NAS.
+11.1 Robustness of Efficient Methods
+One-shot methods are one of the most popular techniques for NAS due to their orders-of-
+magnitude speedups over black-box optimization techniques. While one-shot techniques
+have already seen major progress, they still face performance issues.
+Even though many improvements of one-shot algorithms such as DARTS have been
+proposed (see Section 4.2), these works generally focus on a single improvement; the field
+lacks a large-scale, fair comparison among one-shot methods. Furthermore, as it currently
+stands, applying one-shot methods to a new task requires a significant amount of expertise.
+Devising one-shot approaches that work robustly and reliably across new datasets and tasks
+is an important area for future study.
+Another more recent set of techniques that promises orders-of-magnitude speedups are
+zero-cost proxies (see Section 5.1.2). Although recent work has shown that many zero-cost
+proxies do not consistently outperform simple baselines (Ning et al., 2021), other work ar-
+gues that there is untapped potential for zero-cost proxies (White et al., 2022), especially
+when combined with existing NAS techniques (White et al., 2021c; Xiang et al., 2021). De-
+veloping a better understanding of when and why zero-cost proxies work in certain settings
+is an important area for future research.
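As a concrete illustration, a synflow-style zero-cost proxy scores an untrained network by passing an all-ones input through the network with absolute-valued weights and summing |weight × gradient| over all parameters. The scalar-chain "network" below is a deliberately simplified stand-in for illustration only; real proxies operate on full models:

```python
def synflow_score(weights):
    """Synflow-style proxy for a toy network given as a chain of scalar
    layer weights; no training or labeled data is required."""
    abs_w = [abs(w) for w in weights]
    # Forward pass with input 1.0: the output is the product of |w_i|.
    output = 1.0
    for w in abs_w:
        output *= w
    # For this chain, d(output)/d(w_i) is the product of the other
    # |weights|, so |w_i * grad_i| equals the output for every layer.
    return sum(output for _ in abs_w)

# Rank candidate architectures by proxy score alone (no training).
candidates = {"net_a": [0.5, 2.0, 1.0], "net_b": [0.1, 0.1, 0.1]}
best = max(candidates, key=lambda name: synflow_score(candidates[name]))
```

Whether such a score correlates with trained accuracy on a given search space is precisely the open question raised above.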
+11.2 Going Beyond Hand-Crafted, Rigid Search Spaces
+The search spaces for NAS methods are typically carefully hand-designed by human experts.
+While carefully designing search spaces decreases search times, it also contradicts the idea
+of having an automated system that can be employed by non-experts, and it limits the
+scope of NAS to domains where strong search spaces are available. Furthermore, in the
+last few years, the most-studied type of search space by far has been the cell-based search
+space, which is significantly more rigid than other types of search spaces.
+Hierarchical search spaces offer a better trade-off between flexibility and ease of search,
+yet they are relatively under-explored when compared to cell-based search spaces (see Sec-
+tion 2.5). Furthermore, hierarchical search spaces by nature have a higher diversity when
+compared to cell-based search spaces, reducing the overall human bias of the search space.
+Optimizing search spaces in an automated manner (Ru et al., 2020b), such as starting
+with large, diverse search spaces and then iteratively pruning low-performing parts of the
+space (Guo et al., 2020a; Radosavovic et al., 2020), could allow researchers to consider a
+significantly larger variety of architectures.
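The prune-as-you-go idea can be sketched in a few lines: start from a large candidate pool, score each round, and discard the worst-performing fraction. The `evaluate` callback and the keep fraction are placeholder choices; real systems prune structured regions of the space rather than individual architectures:

```python
def shrink_search_space(space, evaluate, rounds=3, keep_frac=0.5):
    """Iteratively prune low-performing parts of a candidate pool."""
    for _ in range(rounds):
        # Score every surviving candidate and keep the top fraction.
        scored = sorted(space, key=evaluate, reverse=True)
        space = scored[: max(1, int(len(scored) * keep_frac))]
    return space

# With a toy scoring function, 16 candidates shrink to the top 2.
survivors = shrink_search_space(list(range(16)), evaluate=lambda a: a)
```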
+11.3 Fully Automated Deep Learning
+Although NAS has seen a huge amount of interest, recent work has shown that on popular
+search spaces such as the DARTS search space, optimizing the training hyperparameters
+leads to a greater increase in performance than optimizing the architecture (Yang et al.,
+2020; Zela et al., 2020b). While these results show that for some search spaces, optimizing
+hyperparameters may be more important than optimizing the architecture, the best case
+scenario is to optimize both hyperparameters and the architecture simultaneously.
+A new thread of research seeks to simultaneously optimize the hyperparameters and
+architecture: NAS + HPO (see Section 6.1).
+Varying hyperparameters along with the
+architecture also significantly reduces human bias, making it possible to discover previously
+unknown combinations of architectures and hyperparameters that substantially outperform
+existing methods. Therefore, while this problem is significantly more challenging than NAS
+or HPO alone, the potential improvements are much higher.
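A minimal way to see what "NAS + HPO" means operationally is to sample architectural choices and training hyperparameters from one combined space. The space and the random-search strategy below are illustrative placeholders (`evaluate` stands in for training plus validation), not the method of any cited paper:

```python
import random

# Hypothetical joint space mixing architecture and training choices.
SPACE = {
    "num_layers": [4, 8, 12],        # architectural choice
    "width": [64, 128, 256],         # architectural choice
    "learning_rate": [1e-3, 1e-2],   # training hyperparameter
    "weight_decay": [0.0, 1e-4],     # training hyperparameter
}

def joint_random_search(evaluate, n_trials=50, seed=0):
    """Random search over architecture and hyperparameters jointly."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {key: rng.choice(options) for key, options in SPACE.items()}
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg
```

Because each configuration is sampled jointly, interactions between, say, depth and learning rate are explored directly rather than being fixed by a human before the architecture search starts.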
+Furthermore, we do not need to stop just at NAS + HPO: we can optimize the full
+deep learning pipeline, including problem formulation, data processing, data augmentation,
+model deployment, and continuous monitoring. In other words, the goal is to run fully auto-
+mated deep learning (AutoDL) (Dong et al., 2021a). As the field of NAS matures, AutoDL
+has the potential to play a big role in realizing substantial improvements in performance
+for real-world problems.
+Acknowledgments and Disclosure of Funding
+This research was partially supported by TAILOR, a project funded by EU Horizon 2020
+research and innovation programme under GA No 952215. We acknowledge funding by
+European Research Council (ERC) Consolidator Grant “Deep Learning 2.0” (grant no.
+101045765). Funded by the European Union. Views and opinions expressed are however
+those of the author(s) only and do not necessarily reflect those of the European Union or
+the ERC. Neither the European Union nor the ERC can be held responsible for them.
+Figure 9: NAS search space terminology. Operation layers/units/primitives consist of sets
+of 1-3 operations. A block/module denotes a sequential stack of layers in chain-structured
+or macro search spaces. A cell denotes a directed acyclic graph of operations (and a motif
+denotes a small subset of the cell).
+Figure 10: Illustration of a macro search space based on Borsos et al. (2019) (left) and a
+chain-structured search space based on Cai et al. (2020) (right).
+A. Additional Figures and Tables
+For a visualization of the search space terminologies, see Figure 9. In Figure 10, we show
+chain-structured and macro search spaces. Architecture encodings are illustrated in Figure
+11. Finally, for an overview of NAS benchmarks, see Table 2.
+Figure 11: A neural architecture (a) can be encoded using an adjacency matrix (b) or a
+path-based representation (c), with a one-hot or categorical encoding.
+Benchmark                               Size      Type    Task           #Tasks
+NAS-Bench-101                           423k      cell    Image class.   1
+NATS-Bench-TSS (NAS-Bench-201)          6k        cell    Image class.   3
+NATS-Bench-SSS                          32k       macro   Image class.   3
+NAS-Bench-NLP                           > 10^53   cell    NLP            1
+NAS-Bench-1Shot1                        364k      cell    Image class.   1
+Surr-NAS-Bench-DARTS (NAS-Bench-301)    10^18     cell    Image class.   1
+Surr-NAS-Bench-FBNet                    10^21     chain   Image class.   1
+NAS-Bench-ASR                           8k        cell    ASR            1
+TransNAS-Bench-101-Micro                4k        cell    Var. CV        7
+TransNAS-Bench-101-Macro                3k        macro   Var. CV        7
+NAS-Bench-111                           423k      cell    Image class.   1
+NAS-Bench-311                           10^18     cell    Image class.   1
+NAS-Bench-NLP11                         > 10^53   cell    NLP            1
+NAS-Bench-MR                            10^23     cell    Var. CV        9
+NAS-Bench-Macro                         6k        macro   Image class.   1
+HW-NAS-Bench-201                        6k        cell    Image class.   3
+HW-NAS-Bench-FBNet                      10^21     chain   Image class.   1
+NAS-Bench-360                           Var.      suite   Var.           3
+NAS-Bench-Suite                         Var.      suite   Var.           25
+NAS-Bench-Suite-Zero                    Var.      suite   Var.           28
+Table 2: An overview of NAS benchmarks.
+References
+Mohamed S Abdelfattah, Abhinav Mehrotra, Łukasz Dudziak, and Nicholas Donald Lane.
+Zero-cost proxies for lightweight NAS. In Proceedings of the International Conference
+on Learning Representations (ICLR), 2021.
+Abdulaziz Almalaq and Jun Jason Zhang. Evolutionary deep learning-based energy
+consumption prediction for buildings. IEEE Access, 7:1520–1531, 2018.
+Peter J Angeline, Gregory M Saunders, and Jordan B Pollack. An evolutionary algorithm
+that constructs recurrent neural networks. IEEE transactions on Neural Networks, 5(1):
+54–65, 1994.
+Randy Ardywibowo, Shahin Boluki, Xinyu Gong, Zhangyang Wang, and Xiaoning Qian.
+Nads: Neural architecture distribution search for uncertainty awareness. In Proceedings
+of the International Conference on Machine Learning (ICML), pages 356–366. PMLR,
+2020.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by
+jointly learning to align and translate. Proceedings of the International Conference on
+Learning Representations (ICLR), 2015. arXiv preprint arXiv:1409.0473.
+Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network
+architectures using reinforcement learning. In Proceedings of the International Conference
+on Learning Representations (ICLR), 2017.
+Bowen Baker, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Accelerating neural archi-
+tecture search using performance prediction. In Meta-Learning Workshop at NeurIPS,
+2018.
+Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le.
+Understanding and simplifying one-shot architecture search. In Proceedings of the Inter-
+national Conference on Machine Learning (ICML), 2018.
+Gabriel Bender, Hanxiao Liu, Bo Chen, Grace Chu, Shuyang Cheng, Pieter-Jan Kinder-
+mans, and Quoc V. Le. Can weight sharing outperform random architecture search?
+an investigation with tunas. In Proceedings of the IEEE/CVF Conference on Computer
+Vision and Pattern Recognition (CVPR), June 2020.
+Hadjer Benmeziane, Kaoutar El Maghraoui, Hamza Ouarnoughi, Smail Niar, Martin Wis-
+tuba, and Naigang Wang. A Comprehensive Survey on Hardware-Aware Neural Architec-
+ture Search. PhD thesis, LAMIH, Universit´e Polytechnique des Hauts-de-France, 2021.
+James S Bergstra, R´emi Bardenet, Yoshua Bengio, and Bal´azs K´egl. Algorithms for hyper-
+parameter optimization. In Proceedings of the Annual Conference on Neural Information
+Processing Systems (NeurIPS), 2011.
+Kaifeng Bi, Changping Hu, Lingxi Xie, Xin Chen, Longhui Wei, and Qi Tian. Stabilizing
+darts with amended gradient estimation on architectural parameters.
+arXiv preprint
+arXiv:1910.11831, 2019.
+Zal´an Borsos, Andrey Khorlin, and Andrea Gesmundo. Transfer nas: Knowledge trans-
+fer between search spaces with transformer agents. 6th ICML Workshop on Automated
+Machine Learning, arXiv preprint arXiv:1906.08102, 2019.
+Andrew Brock, Theo Lim, JM Ritchie, and Nick Weston. Smash: One-shot model archi-
+tecture search through hypernetworks. In Proceedings of the International Conference on
+Learning Representations (ICLR), 2018.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla
+Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.
+Language models are few-shot learners. Proceedings of the Annual Conference on Neural
+Information Processing Systems (NeurIPS), 33:1877–1901, 2020.
+Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling,
+Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon
+Colton. A survey of monte carlo tree search methods. IEEE Transactions on Computa-
+tional Intelligence and AI in games, 4(1):1–43, 2012.
+Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture
+search by network transformation. In Proceedings of the AAAI Conference on Artificial
+Intelligence (AAAI), 2018a.
+Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-Level Network
+Transformation for Efficient Architecture Search.
+In Proceedings of the International
+Conference on Machine Learning (ICML), 2018b.
+Han Cai, Ligeng Zhu, and Song Han.
+Proxylessnas: Direct neural architecture search
+on target task and hardware. Proceedings of the International Conference on Learning
+Representations (ICLR), 2019.
+Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train
+one network and specialize it for efficient deployment. In Proceedings of the International
+Conference on Learning Representations (ICLR), 2020.
+Stephen Cha, Taehyeon Kim, Hayeon Lee, and Se-Young Yun. Supernet in neural architec-
+ture search: A taxonomic survey. arXiv preprint arXiv:2204.03916, 2022.
+William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. Listen, attend and spell: A
+neural network for large vocabulary conversational speech recognition. In 2016 IEEE in-
+ternational conference on acoustics, speech and signal processing (ICASSP), pages 4960–
+4964. IEEE, 2016.
+Bo Chen, Golnaz Ghiasi, Hanxiao Liu, Tsung-Yi Lin, Dmitry Kalenichenko, Hartwig Adam,
+and Quoc V. Le.
+Mnasfpn: Learning latency-aware pyramid architecture for object
+detection on mobile devices. In Proceedings of the IEEE/CVF Conference on Computer
+Vision and Pattern Recognition (CVPR), June 2020.
+Boyu Chen, Peixia Li, Chuming Li, Baopu Li, Lei Bai, Chen Lin, Ming Sun, Junjie Yan, and
+Wanli Ouyang. Glit: Neural architecture search for global and local image transformer.
+In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages
+12–21, 2021a.
+Hanlin Chen, Ming Lin, Xiuyu Sun, and Hao Li. NAS-bench-zero: A large scale dataset for
+understanding zero-shot neural architecture search, 2022. URL https://openreview.
+net/forum?id=hP-SILoczR.
+Liang-Chieh Chen, Maxwell Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian
+Schroff, Hartwig Adam, and Jon Shlens. Searching for efficient multi-scale architectures
+for dense image prediction. In Proceedings of the Annual Conference on Neural Informa-
+tion Processing Systems (NeurIPS), 2018.
+Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling.
+One-shot neural ensem-
+ble architecture search by diversity-guided search space shrinking.
+Proceedings of the
+IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages
+16525–16534, 2021b.
+Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling. Autoformer: Searching trans-
+formers for visual recognition. In Proceedings of the IEEE/CVF International Conference
+on Computer Vision, pages 12270–12280, 2021c.
+Minghao Chen, Kan Wu, Bolin Ni, Houwen Peng, Bei Liu, Jianlong Fu, Hongyang Chao,
+and Haibin Ling. Searching the search space of vision transformer. Proceedings of the
+Annual Conference on Neural Information Processing Systems (NeurIPS), 34, 2021d.
+Tianqi Chen, Ian J. Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via
+knowledge transfer. In Proceedings of the International Conference on Learning Repre-
+sentations (ICLR), 2016.
+Wuyang Chen, Xinyu Gong, and Zhangyang Wang. Neural architecture search on imagenet
+in four gpu hours: A theoretically inspired perspective. Proceedings of the International
+Conference on Learning Representations (ICLR), 2021e. arXiv preprint arXiv:2102.11535.
+Xiangning Chen and Cho-Jui Hsieh.
+Stabilizing differentiable architecture search via
+perturbation-based regularization.
+In Proceedings of the International Conference on
+Machine Learning (ICML), pages 1554–1565. PMLR, 2020.
+Xiangning Chen, Ruochen Wang, Minhao Cheng, Xiaocheng Tang, and Cho-Jui Hsieh. Dr-
+nas: Dirichlet neural architecture search. In Proceedings of the International Conference
+on Learning Representations (ICLR), 2021f.
+Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive differentiable architecture search:
+Bridging the depth gap between search and evaluation. In Proceedings of the IEEE/CVF
+International Conference on Computer Vision, pages 1294–1303, 2019a.
+Yukang Chen, Tong Yang, Xiangyu Zhang, Gaofeng Meng, Xinyu Xiao, and Jian Sun.
+Detnas: Backbone search for object detection. In Proceedings of the Annual Conference
+on Neural Information Processing Systems (NeurIPS), 2019b.
+Krishna Teja Chitty-Venkata, Murali Emani, Venkatram Vishwanath, and Arun K Somani.
+Neural architecture search for transformers: A survey. IEEE Access, 2022.
+Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Ben-
+gio. Attention-based models for speech recognition. Proceedings of the Annual Conference
+on Neural Information Processing Systems (NeurIPS), 28, 2015.
+Aristeidis Chrostoforidis, George Kyriakides, and Konstantinos Margaritis.
+A novel
+evolutionary algorithm for hierarchical neural architecture search.
+arXiv preprint
+arXiv:2107.08484, 2021.
+Xiangxiang Chu, Tianbao Zhou, Bo Zhang, and Jixiang Li. Fair darts: Eliminating unfair
+advantages in differentiable architecture search.
+In European conference on computer
+vision, pages 465–480. Springer, 2020.
+Xiangxiang Chu, Xiaoxing Wang, Bo Zhang, Shun Lu, Xiaolin Wei, and Junchi Yan. Darts-
+: robustly stepping out of performance collapse without indicators. Proceedings of the
+International Conference on Learning Representations (ICLR), 2021.
+arXiv preprint
+arXiv:2009.01027.
+Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning.
+Electra:
+Pre-training text encoders as discriminators rather than generators. Proceedings of the
+International Conference on Learning Representations (ICLR), 2020.
+arXiv preprint
+arXiv:2003.10555.
+Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training
+deep neural networks with binary weights during propagations. Advances in neural in-
+formation processing systems, 28, 2015.
+Dennis D Cox and Susan John. A statistical method for global optimization. In [Proceedings]
+1992 IEEE International Conference on Systems, Man, and Cybernetics, pages 1241–
+1246. IEEE, 1992.
+Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Bichen Wu, Zijian He, Zhen Wei, Kan Chen,
+Yuandong Tian, Matthew Yu, Peter Vajda, et al. Fbnetv3: Joint architecture-recipe
+search using predictor pretraining. In Proceedings of the IEEE/CVF Conference on Com-
+puter Vision and Pattern Recognition (CVPR), pages 16276–16285, 2021.
+Tri Dao, Nimit Sohoni, Albert Gu, Matthew Eichhorn, Amit Blonder, Megan Leszczynski,
+Atri Rudra, and Christopher R´e. Kaleidoscope: An efficient, learnable representation for
+all structured linear maps. In Proceedings of the International Conference on Learning
+Representations (ICLR), 2020.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of
+deep bidirectional transformers for language understanding. In Proceedings of NAACL-
+HLT, 2019.
+Mingyu Ding, Xiaochen Lian, Linjie Yang, Peng Wang, Xiaojie Jin, Zhiwu Lu, and Ping
+Luo.
+Hr-nas: Searching efficient high-resolution neural architectures with lightweight
+transformers.
+In Proceedings of the IEEE/CVF Conference on Computer Vision and
+Pattern Recognition, pages 2982–2992, 2021a.
+Yuhui Ding, Quanming Yao, Huan Zhao, and Tong Zhang. Diffmg: Differentiable meta
+graph search for heterogeneous graph neural networks. In Proceedings of the 27th ACM
+SIGKDD Conference on Knowledge Discovery & Data Mining, pages 279–288, 2021b.
+Tobias Domhan, Jost Tobias Springenberg, and Frank Hutter.
+Speeding up automatic
+hyperparameter optimization of deep neural networks by extrapolation of learning curves.
+In The International Joint Conference on Artificial Intelligence (IJCAI), 2015.
+Xuanyi Dong and Yi Yang. Searching for a robust neural architecture in four gpu hours. In
+Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
+(CVPR), 2019.
+Xuanyi Dong and Yi Yang. Nas-bench-201: Extending the scope of reproducible neural
+architecture search. In Proceedings of the International Conference on Learning Repre-
+sentations (ICLR), 2020.
+Xuanyi Dong, Mingxing Tan, Adams Wei Yu, Daiyi Peng, Bogdan Gabrys, and Quoc V
+Le.
+Autohas:
+Efficient hyperparameter and architecture search.
+arXiv preprint
+arXiv:2006.03656, 2020.
+Xuanyi Dong, David Jacob Kedziora, Katarzyna Musial, and Bogdan Gabrys. Automated
+deep learning: Neural architecture search is not the end. arXiv preprint arXiv:2112.09245,
+2021a.
+Xuanyi Dong, Lu Liu, Katarzyna Musial, and Bogdan Gabrys. Nats-bench: Benchmarking
+nas algorithms for architecture topology and size. IEEE Transactions on Pattern Analysis
+and Machine Intelligence, 2021b.
+Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai,
+Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain
+Gelly, et al.
+An image is worth 16x16 words: Transformers for image recognition at
+scale. Proceedings of the International Conference on Learning Representations (ICLR),
+2021. arXiv preprint arXiv:2010.11929.
+Sivan Doveh and Raja Giryes.
+Degas: differentiable efficient generator search.
+Neural
+Computing and Applications, 33(24):17173–17184, 2021.
+Xianzhi Du, Tsung-Yi Lin, Pengchong Jin, Golnaz Ghiasi, Mingxing Tan, Yin Cui, Quoc V.
+Le, and Xiaodan Song.
+Spinenet: Learning scale-permuted backbone for recognition
+and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and
+Pattern Recognition (CVPR), June 2020.
+Yawen Duan, Xin Chen, Hang Xu, Zewei Chen, Xiaodan Liang, Tong Zhang, and Zhen-
+guo Li. Transnas-bench-101: Improving transferability and generalizability of cross-task
+neural architecture search. In Proceedings of the IEEE/CVF Conference on Computer
+Vision and Pattern Recognition (CVPR), pages 5251–5260, 2021.
+Lukasz Dudziak, Thomas Chau, Mohamed Abdelfattah, Royson Lee, Hyeji Kim, and
+Nicholas Lane. Brp-nas: Prediction-based nas using gcns. In Proceedings of the An-
+nual Conference on Neural Information Processing Systems (NeurIPS), 2020.
+Thomas Elsken, Jan-Hendrik Metzen, and Frank Hutter. Simple and efficient architecture
+search for convolutional neural networks. arXiv preprint arXiv:1711.04528, 2017.
+Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural ar-
+chitecture search via lamarckian evolution. In Proceedings of the International Conference
+on Learning Representations (ICLR), 2019a.
+Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A
+survey. In JMLR, 2019b.
+Thomas Elsken, Benedikt Staffler, Jan Hendrik Metzen, and Frank Hutter. Meta-learning
+of neural architectures for few-shot learning. In CVPR, 2020.
+Thomas Elsken, Arber Zela, Jan Hendrik Metzen, Benedikt Staffler, Thomas Brox, Abhi-
+nav Valada, and Frank Hutter. Neural architecture search for dense prediction tasks in
+computer vision, 2022.
+Nick Erickson, Jonas Mueller, Alexander Shirkov, Hang Zhang, Pedro Larroy, Mu Li, and
+Alexander Smola. Autogluon-tabular: Robust and accurate automl for structured data.
+arXiv preprint arXiv:2003.06505, 2020.
+Stefan Falkner, Aaron Klein, and Frank Hutter. Bohb: Robust and efficient hyperparameter
+optimization at scale. In Proceedings of the International Conference on Machine Learning
+(ICML), 2018.
+Jiemin Fang, Yuzhu Sun, Kangjian Peng, Qian Zhang, Yuan Li, Wenyu Liu, and Xing-
+gang Wang.
+Fast neural network adaptation via parameter remapping and architec-
+ture search. In Proceedings of the International Conference on Learning Representations
+(ICLR), 2020.
+M. Feurer, A. Klein, K. Eggensperger, J. T. Springenberg, M. Blum, and F. Hutter. Efficient
+and robust automated machine learning. In Proceedings of the Annual Conference on
+Neural Information Processing Systems (NeurIPS), pages 2962–2970, 2015.
+Matthias Feurer and Frank Hutter. Hyperparameter optimization. In Hutter et al. (2019),
+pages 3–38.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine.
+Model-agnostic meta-learning for fast
+adaptation of deep networks. In Proceedings of the International Conference on Machine
+Learning (ICML), 2017.
+Dario Floreano, Peter D¨urr, and Claudio Mattiussi. Neuroevolution: from architectures to
+learning. Evolutionary intelligence, 1(1):47–62, 2008.
+Peter I Frazier. A tutorial on bayesian optimization. stat, 1050:8, 2018.
+Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, and Zhangyang Wang.
+Autogan-distiller: searching to compress generative adversarial networks. In Proceedings
+of the International Conference on Machine Learning (ICML), pages 3292–3303, 2020.
+Saya Fujino, Naoki Mori, and Keinosuke Matsumoto.
+Deep convolutional networks for
+human sketches by means of the evolutionary deep learning. In 2017 Joint 17th World
+Congress of International Fuzzy Systems Association and 9th International Conference
+on Soft Computing and Intelligent Systems (IFSA-SCIS), pages 1–5. IEEE, 2017.
+Vayangi Vishmi Vishara Ganepola and Torin Wirasingha. Automating generative adversar-
+ial networks using neural architecture search: A review. In 2021 International Conference
+on Emerging Smart Computing and Informatics (ESCI), pages 577–582. IEEE, 2021.
+Chen Gao, Yunpeng Chen, Si Liu, Zhenxiong Tan, and Shuicheng Yan. Adversarialnas: Ad-
+versarial neural architecture search for gans. In Proceedings of the IEEE/CVF Conference
+on Computer Vision and Pattern Recognition (CVPR), pages 5680–5689, 2020a.
+Yang Gao, Hong Yang, Peng Zhang, Chuan Zhou, and Yue Hu. Graph neural architec-
+ture search. In The International Joint Conference on Artificial Intelligence (IJCAI),
+volume 20, pages 1403–1409, 2020b.
+Roman Garnett. Bayesian Optimization. Cambridge University Press, 2023. to appear.
+John S Garofolo. Timit acoustic phonetic continuous speech corpus. Linguistic Data Con-
+sortium, 1993, 1993.
+Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V. Le. Nas-fpn: Learning scalable feature pyramid
+architecture for object detection.
+In The IEEE Conference on Computer Vision and
+Pattern Recognition (CVPR), June 2019.
+Spencer Gibb, Hung Manh La, and Sushil Louis. A genetic algorithm for convolutional
+network structure optimization for concrete crack detection. In 2018 IEEE Congress on
+Evolutionary Computation (CEC), pages 1–8. IEEE, 2018.
+R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate
+object detection and semantic segmentation. In 2014 IEEE Conference on Computer
+Vision and Pattern Recognition, pages 580–587, 2014.
+David E Goldberg and Kalyanmoy Deb. A comparative analysis of selection schemes used in
+genetic algorithms. In Foundations of genetic algorithms, volume 1, pages 69–93. Elsevier,
+1991.
+Chengyue Gong, Dilin Wang, Meng Li, Xinlei Chen, Zhicheng Yan, Yuandong Tian, Vikas
+Chandra, et al. Nasvit: Neural architecture search for efficient vision transformers with
+gradient conflict aware supernet training. In International Conference on Learning Rep-
+resentations, 2021.
+Xinyu Gong, Shiyu Chang, Yifan Jiang, and Zhangyang Wang. Autogan: Neural archi-
+tecture search for generative adversarial networks.
+In Proceedings of the IEEE/CVF
+International Conference on Computer Vision, pages 3224–3234, 2019.
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil
+Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Proceedings of
+the Annual Conference on Neural Information Processing Systems (NeurIPS), 27, 2014.
+Li Guilin, Zhang Xing, Wang Zitong, Li Zhenguo, and Zhang Tong. Stacnas: Towards stable
+and consistent optimization for differentiable neural architecture search.
+Openreview
+submission https://openreview.net/forum?id=rygpAnEKDH, 2019.
+Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C
+Courville.
+Improved training of wasserstein gans. Proceedings of the Annual Confer-
+ence on Neural Information Processing Systems (NeurIPS), 30, 2017.
+Jianyuan Guo, Kai Han, Yunhe Wang, Chao Zhang, Zhaohui Yang, Han Wu, Xinghao
+Chen, and Chang Xu. Hit-detector: Hierarchical trinity architecture search for object
+detection. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition
+(CVPR), June 2020a.
+Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian
+Sun. Single path one-shot neural architecture search with uniform sampling. In European
+Conference on Computer Vision, pages 544–560. Springer, 2020b.
+David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. In Proceedings of the International
+Conference on Learning Representations (ICLR), 2017.
+Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan
+Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling
+up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
+K. He, X. Zhang, S. Ren, and J. Sun.
+Spatial pyramid pooling in deep convolutional
+networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine
+Intelligence, 37(9):1904–1916, 2015.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
+Deep residual learning for
+image recognition. In Proceedings of the IEEE conference on computer vision and pattern
+recognition, pages 770–778, 2016a.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
+Deep residual learning for
+image recognition. In Proceedings of the IEEE conference on computer vision and pattern
+recognition, pages 770–778, 2016b.
+Xin He, Kaiyong Zhao, and Xiaowen Chu.
+Automl: A survey of the state-of-the-art.
+Knowledge-Based Systems, 212:106622, 2021.
+Philipp Hennig and Christian J Schuler. Entropy search for information-efficient global
+optimization. Journal of Machine Learning Research, 13(Jun):1809–1837, 2012.
+Jos´e Miguel Hern´andez-Lobato, Matthew W Hoffman, and Zoubin Ghahramani. Predictive
+entropy search for efficient global optimization of black-box functions. In Proceedings
+of the Annual Conference on Neural Information Processing Systems (NeurIPS), pages
+918–926, 2014.
+Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp
+Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilib-
+rium. Proceedings of the Annual Conference on Neural Information Processing Systems
+(NeurIPS), 30, 2017.
+Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation,
+9(8):1735–1780, 1997.
+Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gra-
+dient descent.
+In Georg Dorffner, Horst Bischof, and Kurt Hornik, editors, Artificial
+Neural Networks — ICANN 2001, pages 87–94, Berlin, Heidelberg, 2001. Springer Berlin
+Heidelberg.
+Noah Hollmann, Samuel M¨uller, Katharina Eggensperger, and Frank Hutter. Tabpfn: A
+transformer that solves small tabular classification problems in a second. arXiv preprint
+arXiv:2207.01848, 2022.
+T. M. Hospedales, A. Antoniou, P. Micaelli, and A. J. Storkey. Meta-learning in neural
+networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence,
+2021.
+Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, To-
+bias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional
+neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
+Hanzhang Hu, John Langford, Rich Caruana, Saurajit Mukherjee, Eric Horvitz, and De-
+badeepta Dey. Efficient forward architecture search. In Proceedings of the Annual Con-
+ference on Neural Information Processing Systems (NeurIPS), 2019.
+Shou-Yong Hu, Sirui Xie, Hehui Zheng, Chunxiao Liu, Jianping Shi, Xunying Liu, and
+Dahua Lin. Dsnas: Direct neural architecture search without parameter retraining. 2020
+IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages
+12081–12089, 2020.
+Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely
+connected convolutional networks. In Proceedings of the IEEE Conference on Computer
+Vision and Pattern Recognition (CVPR), July 2017.
+Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Sequential model-based optimiza-
+tion for general algorithm configuration. In Proceedings of the 5th International Confer-
+ence on Learning and Intelligent Optimization, LION'05, pages 507–523, Berlin, Heidel-
+berg, 2011. Springer-Verlag. ISBN 9783642255656. doi: 10.1007/978-3-642-25566-3_40.
+URL https://doi.org/10.1007/978-3-642-25566-3_40.
+Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren, editors. Automated Machine Learn-
+ing: Methods, Systems, Challenges. Springer, 2019.
+Carl Hvarfner, Frank Hutter, and Luigi Nardi. Joint entropy search for maximally-informed
+bayesian optimization. In Proceedings of the Annual Conference on Neural Information
+Processing Systems (NeurIPS), 2022.
+Neural Architecture Search: Insights from 1000 Papers
+Sergio Izquierdo, Julia Guerrero-Viu, Sven Hauns, Guilherme Miotto, Simon Schrodi, André
+Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer, and Frank Hutter. Bag of
+baselines for multi-objective joint neural architecture search and hyperparameter opti-
+mization. In 8th ICML Workshop on Automated Machine Learning (AutoML), 2021.
+Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence
+and generalization in neural networks. Proceedings of the Annual Conference on Neural
+Information Processing Systems (NeurIPS), 31, 2018.
+Kevin Jamieson and Ameet Talwalkar. Non-stochastic best arm identification and hyper-
+parameter optimization. In Proceedings of the International Conference on Artificial
+Intelligence and Statistics (AISTATS), 2016.
+Mojan Javaheripi, Shital Shah, Subhabrata Mukherjee, Tomasz Lukasz Religa, Caio Ce-
+sar Teodoro Mendes, Gustavo Henrique de Rosa, Sebastien Bubeck, Farinaz Koushanfar,
+and Debadeepta Dey. Litetransformersearch: Training-free on-device search for efficient
+autoregressive language models. In Proceedings of the Annual Conference on Neural In-
+formation Processing Systems (NeurIPS), 2022.
+Shengli Jiang and Prasanna Balaprakash. Graph neural network architecture search for
+molecular property prediction. In 2020 IEEE International Conference on Big Data (Big
+Data), pages 1346–1353. IEEE, 2020.
+Haifeng Jin, Qingquan Song, and Xia Hu. Auto-keras: An efficient neural architecture search
+system. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge
+Discovery & Data Mining, 2019a.
+Haifeng Jin, Qingquan Song, and Xia Hu. Auto-keras: An efficient neural architecture
+search system. In Proceedings of the 25th ACM SIGKDD International Conference on
+Knowledge Discovery & Data Mining, pages 1946–1956. ACM, 2019b.
+Donald R Jones, Matthias Schonlau, and William J Welch. Efficient global optimization of
+expensive black-box functions. Journal of Global Optimization, 13(4):455–492, 1998.
+Arlind Kadra, Marius Lindauer, Frank Hutter, and Josif Grabocka. Regularization is all
+you need: Simple neural nets can excel on tabular data. arXiv preprint arXiv:2106.11189,
+2021.
+Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, and Barnabás Póczos. Multi-
+fidelity Bayesian optimisation with continuous approximations. In Proceedings of the
+International Conference on Machine Learning (ICML), 2017.
+Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabás Póczos, and Eric P
+Xing. Neural architecture search with bayesian optimisation and optimal transport. In
+Proceedings of the Annual Conference on Neural Information Processing Systems
+(NeurIPS), 2018.
+David Jacob Kedziora, Katarzyna Musial, and Bogdan Gabrys. Autonoml: Towards an
+integrated framework for autonomous machine learning. arXiv preprint arXiv:2012.12600,
+2020.
+White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter
+Hiroaki Kitano. Designing neural networks using genetic algorithms with graph generation
+system. Complex systems, 4(4):461–476, 1990.
+Jyrki Kivinen and Manfred K Warmuth. Exponentiated gradient versus gradient descent
+for linear predictors. Information and Computation, 132, 1997.
+Aaron Klein, Stefan Falkner, Jost Tobias Springenberg, and Frank Hutter. Learning curve
+prediction with bayesian neural networks. In Proceedings of the International Conference
+on Learning Representations (ICLR), 2017.
+Aaron Klein, Louis Tiao, Thibaut Lienart, Cedric Archambeau, and Matthias Seeger.
+Model-based asynchronous hyperparameter and neural architecture search. arXiv preprint
+arXiv:2003.10865, 2020.
+Nikita Klyuchnikov, Ilya Trofimov, Ekaterina Artemova, Mikhail Salnikov, Maxim Fedorov,
+Alexander Filippov, and Evgeny Burnaev. Nas-bench-nlp: neural architecture search
+benchmark for natural language processing. IEEE Access, 10:45736–45747, 2022.
+Masayuki Kobayashi and Tomoharu Nagao. A multi-objective architecture search for gen-
+erative adversarial networks. In Proceedings of the 2020 Genetic and Evolutionary Com-
+putation Conference Companion, pages 133–134, 2020.
+Bernard Koch, Emily Denton, Alex Hanna, and Jacob G Foster. Reduced, reused and
+recycled: The life of a dataset in machine learning research. Proceedings of the Annual
+Conference on Neural Information Processing Systems (NeurIPS), 2021. arXiv preprint
+arXiv:2112.01716.
+Arjun Krishnakumar, Colin White, Arber Zela, Renbo Tu, Mahmoud Safari, and Frank
+Hutter. Nas-bench-suite-zero: Accelerating research on zero cost proxies. In Proceedings
+of the Annual Conference on Neural Information Processing Systems (NeurIPS), Datasets
+and Benchmarks Track, 2022.
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep
+convolutional neural networks. In Proceedings of the Annual Conference on Neural In-
+formation Processing Systems (NeurIPS), 2012.
+David Krueger, Chin-Wei Huang, Riashat Islam, Ryan Turner, Alexandre Lacoste, and
+Aaron Courville. Bayesian hypernetworks. arXiv preprint arXiv:1710.04759, 2017.
+Deepika Kumari and Kamaljit Kaur. A survey on stereo matching techniques for 3d vision
+in image processing. Int. J. Eng. Manuf, 4:40–49, 2016.
+Kevin Alexander Laube, Maximus Mutschler, and Andreas Zell. What to expect of hardware
+metric predictors in NAS, 2022. URL https://openreview.net/forum?id=2DJn3E7lXu.
+Yann LeCun, Patrick Haffner, Léon Bottou, and Yoshua Bengio. Object recognition with
+gradient-based learning. In Shape, contour and grouping in computer vision, 1999.
+Hayeon Lee, Eunyoung Hyung, and Sung Ju Hwang. Rapid neural architecture search by
+learning to generate graphs from datasets. In Proceedings of the International Conference
+on Learning Representations (ICLR), 2021.
+Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh.
+Set transformer: A framework for attention-based permutation-invariant neural networks.
+In Proceedings of the International Conference on Machine Learning (ICML), 2019a.
+Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr. Snip: Single-shot network prun-
+ing based on connection sensitivity. In Proceedings of the International Conference on
+Learning Representations (ICLR), 2019b.
+Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, and
+Xiaojun Chang. Bossnas: Exploring hybrid cnn-transformers with block-wisely self-
+supervised neural architecture search. In Proceedings of the IEEE/CVF International
+Conference on Computer Vision, pages 12281–12291, 2021a.
+Chaojian Li, Zhongzhi Yu, Yonggan Fu, Yongan Zhang, Yang Zhao, Haoran You, Qixuan
+Yu, Yue Wang, Cong Hao, and Yingyan Lin. HW-NAS-Bench: Hardware-aware neu-
+ral architecture search benchmark. In Proceedings of the International Conference on
+Learning Representations (ICLR), 2021b.
+Guohao Li, Guocheng Qian, Itzel C Delgadillo, Matthias Muller, Ali Thabet, and Bernard
+Ghanem. Sgas: Sequential greedy architecture search. In Proceedings of the IEEE/CVF
+Conference on Computer Vision and Pattern Recognition (CVPR), pages 1620–1630,
+2020a.
+Jian Li, Yong Liu, Jiankun Liu, and Weiping Wang. Neural architecture optimization with
+graph vae. arXiv preprint arXiv:2006.10310, 2020b.
+Liam Li and Ameet Talwalkar. Random search and reproducibility for neural architecture
+search. In Uncertainty in Artificial Intelligence (UAI), 2019.
+Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin
+Recht, and Ameet Talwalkar. A system for massively parallel hyperparameter tuning. In
+Proceedings of the Conference on Machine Learning Systems (MLSys), 2020c.
+Liam Li, Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar. Geometry-aware
+gradient algorithms for neural architecture search. In Proceedings of the International
+Conference on Learning Representations (ICLR), 2021c.
+Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar.
+Hyperband: A novel bandit-based approach to hyperparameter optimization. Journal of
+Machine Learning Research, 2018.
+Yuhong Li, Cong Hao, Pan Li, Jinjun Xiong, and Deming Chen. Generic neural architec-
+ture search via regression. Proceedings of the Annual Conference on Neural Information
+Processing Systems (NeurIPS), 34:20476–20490, 2021d.
+Dongze Lian, Yin Zheng, Yintao Xu, Yanxiong Lu, Leyu Lin, Peilin Zhao, Junzhou Huang,
+and Shenghua Gao. Towards fast adaptation of neural architectures with meta learning. In
+Proceedings of the International Conference on Learning Representations (ICLR), 2020.
+Hanwen Liang, Shifeng Zhang, Jiacheng Sun, Xingqiu He, Weiran Huang, Kechen Zhuang,
+and Zhenguo Li. Darts+: Improved differentiable architecture search with early stopping.
+arXiv preprint arXiv:1909.06035, 2019.
+Ming Lin, Pichao Wang, Zhenhong Sun, Hesen Chen, Xiuyu Sun, Qi Qian, Hao Li, and Rong
+Jin. Zen-nas: A zero-shot nas for high-performance image recognition. In Proceedings of
+the IEEE/CVF International Conference on Computer Vision, pages 347–356, 2021.
+Marius Lindauer and Frank Hutter. Best practices for scientific research on neural archi-
+tecture search. Journal of Machine Learning Research, 2020.
+Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Difan
+Deng, Carolin Benjamins, Tim Ruhkopf, René Sass, and Frank Hutter. Smac3: A versa-
+tile bayesian optimization package for hyperparameter optimization. Journal of Machine
+Learning Research, 2022.
+Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-
+Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture
+search. In Proceedings of the European Conference on Computer Vision (ECCV), pages
+19–34, 2018a.
+Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L. Yuille,
+and Li Fei-Fei. Auto-deeplab: Hierarchical neural architecture search for semantic image
+segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition
+(CVPR), June 2019a.
+Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L Yuille,
+and Li Fei-Fei. Auto-deeplab: Hierarchical neural architecture search for semantic image
+segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and
+Pattern Recognition (CVPR), 2019b.
+Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray
+Kavukcuoglu. Hierarchical representations for efficient architecture search. In Proceedings
+of the International Conference on Learning Representations (ICLR), 2018b.
+Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search.
+In Proceedings of the International Conference on Learning Representations (ICLR),
+2019c.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy,
+Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized
+bert pretraining approach, 2019d.
+Yuqiao Liu, Yanan Sun, Bing Xue, Mengjie Zhang, Gary G Yen, and Kay Chen Tan. A
+survey on evolutionary neural architecture search. IEEE Transactions on Neural Networks
+and Learning Systems, 2021a.
+Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and
+Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows.
+In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages
+10012–10022, 2021b.
+Mohammad Loni, Sima Sinaei, Ali Zoljodi, Masoud Daneshtalab, and Mikael Sjödin. Deep-
+maker: A multi-objective optimization framework for deep neural networks in embedded
+systems. Microprocessors and Microsystems, 73:102989, 2020.
+Zhichao Lu, Ian Whalen, Vishnu Boddeti, Yashesh Dhebar, Kalyanmoy Deb, Erik Good-
+man, and Wolfgang Banzhaf. Nsga-net: Neural architecture search using multi-objective
+genetic algorithm. In Proceedings of the Genetic and Evolutionary Computation Confer-
+ence (GECCO), 2019.
+Zhichao Lu, Kalyanmoy Deb, Erik Goodman, Wolfgang Banzhaf, and Vishnu Naresh Bod-
+deti. Nsganetv2: Evolutionary multi-objective surrogate-assisted neural architecture
+search. In Computer Vision – ECCV 2020, pages 35–51, Cham, 2020. Springer Inter-
+national Publishing.
+Jovita Lukasik, David Friede, Arber Zela, Frank Hutter, and Margret Keuper. Smooth
+variational graph embeddings for efficient neural architecture search. In International
+Joint Conference on Neural Networks (IJCNN), 2021.
+Jovita Lukasik, Steffen Jung, and Margret Keuper. Learning where to look–generative nas
+is surprisingly efficient. In The European Conference on Computer Vision (ECCV), 2022.
+Jelena Luketina, Mathias Berglund, Klaus Greff, and Tapani Raiko. Scalable gradient-based
+tuning of continuous regularization hyperparameters. In Proceedings of the International
+Conference on Machine Learning (ICML), pages 2952–2960, 2016.
+Renqian Luo, Xu Tan, Rui Wang, Tao Qin, Enhong Chen, and Tie-Yan Liu. Semi-supervised
+neural architecture search. In Proceedings of the Annual Conference on Neural Informa-
+tion Processing Systems (NeurIPS), 2020.
+Sebastian Lutz, Konstantinos Amplianitis, and Aljoscha Smolic. Alphagan: Generative ad-
+versarial networks for natural image matting. In The British Machine Vision Conference
+(BMVC), 2018.
+Lizheng Ma, Jiaxu Cui, and Bo Yang. Deep neural architecture search with deep graph
+bayesian optimization. In 2019 IEEE/WIC/ACM International Conference on Web In-
+telligence (WI), pages 500–507. IEEE, 2019.
+Matthew Mackay, Paul Vicol, Jonathan Lorraine, David Duvenaud, and Roger Grosse. Self-
+tuning networks: Bilevel optimization of hyperparameters using structured best-response
+functions. In Proceedings of the International Conference on Learning Representations
+(ICLR), 2019.
+Neeratyoy Mallik and Noor Awad. Dehb: Evolutionary hyperband for scalable, robust
+and efficient hyperparameter optimization. In The International Joint Conference on
+Artificial Intelligence (IJCAI), 2021.
+Abhinav Mehrotra, Alberto Gil C. P. Ramos, Sourav Bhattacharya, Łukasz Dudziak,
+Ravichander Vipperla, Thomas Chau, Mohamed S Abdelfattah, Samin Ishtiaq, and
+Nicholas Donald Lane. Nas-bench-asr: Reproducible neural architecture search for speech
+recognition. In Proceedings of the International Conference on Learning Representations
+(ICLR), 2021.
+Yash Mehta, Colin White, Arber Zela, Arjun Krishnakumar, Guri Zabergja, Shakiba Mora-
+dian, Mahmoud Safari, Kaicheng Yu, and Frank Hutter. Nas-bench-suite: Nas evaluation
+is (now) surprisingly easy. In Proceedings of the International Conference on Learning
+Representations (ICLR), 2022.
+Joe Mellor, Jack Turner, Amos Storkey, and Elliot J Crowley. Neural architecture search
+without training. In Proceedings of the International Conference on Machine Learning
+(ICML), pages 7588–7598. PMLR, 2021.
+H Mendoza, A Klein, M Feurer, J Springenberg, and F Hutter. Towards automatically-
+tuned neural networks. In ICML 2016 AutoML Workshop, 2016.
+Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adver-
+sarial networks. In Proceedings of the International Conference on Learning Representa-
+tions (ICLR), 2017.
+Microsoft. Neural Network Intelligence, 2021. URL https://github.com/microsoft/nni.
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed
+representations of words and phrases and their compositionality. In Proceedings of the
+Annual Conference on Neural Information Processing Systems (NeurIPS), 2013.
+Geoffrey F Miller, Peter M Todd, and Shailesh U Hegde. Designing neural networks using
+genetic algorithms. In ICGA, volume 89, pages 379–384, 1989.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G.
+Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig
+Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Ku-
+maran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through
+deep reinforcement learning. Nature, 518(7540):529–533, Feb 2015.
+Jonas Močkus. On bayesian methods for seeking the extremum. In Optimization Techniques
+IFIP Technical Conference, pages 400–404. Springer, 1975.
+J. Pablo Muñoz, Nikolay Lyalyushkin, Yash Akhauri, Anastasia Senina, Alexander Kozlov,
+and Nilesh Jain. Enabling NAS with automated super-network generation. AAAI 1st
+International Workshop on Practical Deep Learning in the Wild, 2022.
+Byunggook Na, Jisoo Mok, Hyeokjun Choe, and Sungroh Yoon. Accelerating neural archi-
+tecture search via proxy data. The International Joint Conference on Artificial Intelli-
+gence (IJCAI), 2021.
+Ashwin Raaghav Narayanan, Arber Zela, Tonmoy Saikia, Thomas Brox, and Frank Hutter.
+Multi-headed neural ensemble search. In Workshop on Uncertainty and Robustness in
+Deep Learning (UDL@ICML‘21), 2021.
+Aviv Navon, Aviv Shamsian, Gal Chechik, and Ethan Fetaya. Learning the pareto front
+with hypernetworks. In Proceedings of the International Conference on Learning Repre-
+sentations (ICLR), 2021.
+Niv Nayman, Asaf Noy, Tal Ridnik, Itamar Friedman, Rong Jin, and Lihi Zelnik. Xnas:
+Neural architecture search with expert advice. Proceedings of the Annual Conference on
+Neural Information Processing Systems (NeurIPS), 32, 2019.
+Renato Negrinho and Geoff Gordon. Deeparchitect: Automatically designing and training
+deep architectures. stat, 1050:28, 2017.
+Vladimir Nekrasov, Hao Chen, Chunhua Shen, and Ian Reid. Fast neural architecture search
+of compact semantic segmentation models via auxiliary cells. In The IEEE Conference
+on Computer Vision and Pattern Recognition (CVPR), June 2019.
+Vu Nguyen, Tam Le, Makoto Yamada, and Michael A Osborne. Optimal transport kernels
+for sequential and parallel neural architecture search. In Proceedings of the International
+Conference on Machine Learning (ICML), pages 8084–8095. PMLR, 2021.
+Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms.
+arXiv preprint, 2018.
+Xuefei Ning, Yin Zheng, Tianchen Zhao, Yu Wang, and Huazhong Yang. A generic graph-
+based neural architecture encoding scheme for predictor-based nas. In European Confer-
+ence on Computer Vision, pages 189–204. Springer, 2020.
+Xuefei Ning, Changcheng Tang, Wenshuo Li, Zixuan Zhou, Shuang Liang, Huazhong Yang,
+and Yu Wang. Evaluating efficient performance estimators of neural architectures. Pro-
+ceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS),
+34, 2021.
+Matheus Nunes and Gisele L Pappa. Neural architecture search in graph neural networks.
+In Brazilian Conference on Intelligent Systems, pages 302–317. Springer, 2020.
+R. Olson, N. Bartley, R. Urbanowicz, and J. Moore. Evaluation of a Tree-based Pipeline
+Optimization Tool for Automating Data Science. In T. Friedrich, editor, Proceedings
+of the Genetic and Evolutionary Computation Conference (GECCO’16), pages 485–492.
+ACM, 2016.
+T Den Ottelander, Arkadiy Dushatskiy, Marco Virgolin, and Peter AN Bosman. Local
+search is a remarkably strong baseline for neural architecture search. In International
+Conference on Evolutionary Multi-Criterion Optimization, 2021.
+Daiyi Peng, Xuanyi Dong, Esteban Real, Mingxing Tan, Yifeng Lu, Gabriel Bender, Hanx-
+iao Liu, Adam Kraft, Chen Liang, and Quoc Le. Pyglove: Symbolic programming for
+automated machine learning. In Proceedings of the Annual Conference on Neural Infor-
+mation Processing Systems (NeurIPS), 2020.
+Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural archi-
+tecture search via parameters sharing. In Proceedings of the International Conference on
+Machine Learning (ICML), 2018.
+Aloïs Pourchot, Alexis Ducarouge, and Olivier Sigaud. To share or not to share: A com-
+prehensive appraisal of weight-sharing. arXiv preprint arXiv:2002.04289, 2020.
+Vishak Prasad, Colin White, Paarth Jain, Sibasis Nayak, Rishabh Iyer, and Ganesh
+Ramakrishnan. Speeding up NAS with adaptive subset selection. arXiv preprint
+arXiv:2211.01454, 2022.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.
+Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
+Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollar.
+Designing network design spaces. In The IEEE/CVF Conference on Computer Vision
+and Pattern Recognition (CVPR), June 2020.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael
+Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. Exploring the limits of transfer learning
+with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 2020.
+Inioluwa Deborah Raji, Emily M Bender, Amandalynne Paullada, Emily Denton, and Alex
+Hanna. Ai and the everything in the whole wide world benchmark. Proceedings of the
+Annual Conference on Neural Information Processing Systems (NeurIPS), Datasets and
+Benchmarks Track, 2021.
+Aditya Rawal, Joel Lehman, Felipe Petroski Such, Jeff Clune, and Kenneth O. Stanley.
+Synthetic petri dish: A novel surrogate model for rapid architecture search, 2020.
+Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie
+Tan, Quoc V. Le, and Alexey Kurakin. Large-scale evolution of image classifiers. In
+Proceedings of the International Conference on Machine Learning (ICML), 2017.
+Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for
+image classifier architecture search. In Proceedings of the AAAI Conference on Artificial
+Intelligence (AAAI), 2019.
+Esteban Real, Chen Liang, David So, and Quoc Le. Automl-zero: Evolving machine learning
+algorithms from scratch. In Proceedings of the International Conference on Machine
+Learning (ICML), pages 8007–8019. PMLR, 2020.
+Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen,
+and Xin Wang. A comprehensive survey of neural architecture search: Challenges and
+solutions. arXiv preprint arXiv:2006.02903, 2020.
+Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher Ré, and Ameet Tal-
+walkar. Rethinking neural operations for diverse tasks. In Proceedings of the Annual
+Conference on Neural Information Processing Systems (NeurIPS), 2021.
+Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for
+biomedical image segmentation. In Nassir Navab, Joachim Hornegger, William M. Wells,
+and Alejandro F. Frangi, editors, Medical Image Computing and Computer-Assisted In-
+tervention – MICCAI 2015, 2015.
+Binxin Ru, Clare Lyle, Lisa Schut, Mark van der Wilk, and Yarin Gal. Revisiting the train
+loss: an efficient performance estimator for neural architecture search. stat, 1050:8, 2020a.
+Binxin Ru, Xingchen Wan, Xiaowen Dong, and Michael Osborne. Neural architecture
+search using bayesian optimisation with weisfeiler-lehman kernel. In Proceedings of the
+International Conference on Learning Representations (ICLR), 2021.
+Robin Ru, Pedro Esperança, and Fabio Maria Carlucci. Neural architecture generator
+optimization. Proceedings of the Annual Conference on Neural Information Processing
+Systems (NeurIPS), 33, 2020b.
+Michael Ruchte, Arber Zela, Julien Siems, Josif Grabocka, and Frank Hutter. Naslib: a
+modular and flexible neural architecture search library, 2020.
+Tonmoy Saikia, Yassine Marrakchi, Arber Zela, Frank Hutter, and Thomas Brox. Autodisp-
+net: Improving disparity estimation with automl. In The IEEE International Conference
+on Computer Vision (ICCV), October 2019.
+Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and
+Xi Chen. Improved techniques for training gans. Proceedings of the Annual Conference
+on Neural Information Processing Systems (NeurIPS), 29, 2016.
+Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen.
+Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE con-
+ference on computer vision and pattern recognition, pages 4510–4520, 2018.
+Santanu Santra, Jun-Wei Hsieh, and Chi-Fang Lin. Gradient descent effects on differential
+neural architecture search: A survey. IEEE Access, 9:89602–89618, 2021.
+Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In Proceedings of the
+Annual Conference on Neural Information Processing Systems (NeurIPS), 2016.
+Jürgen Schmidhuber. Evolutionary principles in self-referential learning. On learning how to
+learn: The meta-meta-meta...-hook. Master’s thesis, Technische Universität München,
+Germany, 1987.
+Jürgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic
+recurrent networks. Neural Computation, 4(1):131–139, 1992.
+Jürgen Schmidhuber. A ‘self-referential’ weight matrix. In International conference on arti-
+ficial neural networks, pages 446–450. Springer, 1993.
+Lennart Schneider, Florian Pfisterer, Martin Binder, and Bernd Bischl. Mutation is all you
+need. In 8th ICML Workshop on Automated Machine Learning (AutoML), 2021.
+Christoph Schorn, Thomas Elsken, Sebastian Vogel, Armin Runge, Andre Guntoro, and
+Gerd Ascheid. Automated design of error-resilient and hardware-efficient deep neural
+networks. In Springer Neural Computing and Applications, 2020.
+John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal
+policy optimization algorithms. ArXiv, abs/1707.06347, 2017.
+Christian Sciuto, Kaicheng Yu, Martin Jaggi, Claudiu Musat, and Mathieu Salzmann. Eval-
+uating the search phase of neural architecture search. In Proceedings of the International
+Conference on Learning Representations (ICLR), 2020.
+Gresa Shala, Thomas Elsken, Frank Hutter, and Josif Grabocka. Transfer NAS with meta-
+learned bayesian surrogates. In Sixth Workshop on Meta-Learning at the Conference on
+Neural Information Processing Systems, 2022.
+Albert Shaw, Daniel Hunter, Forrest Iandola, and Sammy Sidhu. Squeezenas: Fast neural
+architecture search for faster semantic segmentation. In The IEEE International Confer-
+ence on Computer Vision (ICCV) Workshops, Oct 2019.
+Junhong Shen, Mikhail Khodak, and Ameet Talwalkar. Efficient architecture search for
+diverse tasks. In Proceedings of the Annual Conference on Neural Information Processing
+Systems (NeurIPS), 2022.
+Yu Shen, Yang Li, Jian Zheng, Wentao Zhang, Peng Yao, Jixiang Li, Sen Yang, Ji Liu,
+and Cui Bin. Proxybo: Accelerating neural architecture search via bayesian optimization
+with zero-cost proxies. arXiv preprint arXiv:2110.10423, 2021.
+Han Shi, Renjie Pi, Hang Xu, Zhenguo Li, James Kwok, and Tong Zhang. Bridging the gap
+between sample-based and one-shot neural architecture search with bonas. In Proceedings
+of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
+Jae-hun Shim, Kyeongbo Kong, and Suk-Ju Kang. Core-set sampling for efficient neural
+architecture search. arXiv preprint arXiv:2107.06869, 2021.
+Yao Shu, Shaofeng Cai, Zhongxiang Dai, Beng Chin Ooi, and Bryan Kian Hsiang Low.
+Nasi: Label-and data-agnostic neural architecture search at initialization. In Proceedings
+of the International Conference on Learning Representations (ICLR), 2021.
+Yao Shu, Yizhou Chen, Zhongxiang Dai, and Bryan Low. Neural ensemble search via
+bayesian sampling. In Uncertainty in Artificial Intelligence (UAI), 2022.
+Julien Siems, Lucas Zimmer, Arber Zela, Jovita Lukasik, Margret Keuper, and Frank Hut-
+ter. Nas-bench-301 and the case for surrogate benchmarks for neural architecture search.
+arXiv preprint arXiv:2008.09777, 2020.
+David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van
+Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc
+Lanctot, et al. Mastering the game of go with deep neural networks and tree search.
+Nature, 529(7587):484–489, 2016.
+David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang,
+Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Master-
+ing the game of go without human knowledge. Nature, 550(7676):354–359, 2017.
+David So, Quoc Le, and Chen Liang. The evolved transformer. In Proceedings of the
+International Conference on Machine Learning (ICML). PMLR, 2019.
+David R. So, Wojciech Mańke, Hanxiao Liu, Zihang Dai, Noam Shazeer, and Quoc V. Le.
+Primer: Searching for efficient transformers for language modeling, 2021.
+Gowthami Somepalli, Micah Goldblum, Avi Schwarzschild, C Bayan Bruss, and Tom Gold-
+stein. Saint: Improved neural networks for tabular data via row attention and contrastive
+pre-training. arXiv preprint arXiv:2106.01342, 2021.
+Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, and Yunhe Wang. Efficient resid-
+ual dense block search for image super-resolution. In Proceedings of the AAAI Conference
+on Artificial Intelligence (AAAI), volume 34, pages 12007–12014, 2020.
+Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, and Frank Hutter. Bayesian opti-
+mization with robust bayesian neural networks. In Proceedings of the Annual Conference
+on Neural Information Processing Systems (NeurIPS), pages 4134–4142, 2016.
+Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian process
+optimization in the bandit setting: No regret and experimental design. In Proceedings of
+the 27th International Conference on Machine Learning. Omnipress, 2010.
+Kenneth O Stanley and Risto Miikkulainen. Evolving neural networks through augmenting
+topologies. Evolutionary computation, 10(2):99–127, 2002.
+Kenneth O Stanley, David B D’Ambrosio, and Jason Gauci. A hypercube-based encoding
+for evolving large-scale neural networks. Artificial life, 15(2):185–212, 2009.
+Rainer Storn and Kenneth Price. Differential evolution – a simple and efficient heuristic
+for global optimization over continuous spaces. J. of Global Optimization, 11(4):341–359,
+dec 1997.
+Xiu Su, Shan You, Jiyang Xie, Mingkai Zheng, Fei Wang, Chen Qian, Changshui Zhang,
+Xiaogang Wang, and Chang Xu. Vitas: Vision transformer architecture search. arXiv
+preprint arXiv:2106.13700, 2021.
+Felipe Petroski Such, Aditya Rawal, Joel Lehman, Kenneth Stanley, and Jeffrey Clune.
+Generative teaching networks: Accelerating neural architecture search by learning to
+generate synthetic training data. In Proceedings of the International Conference on Ma-
+chine Learning (ICML), pages 9206–9216. PMLR, 2020.
+Masanori Suganuma, Shinichi Shirakawa, and Tomoharu Nagao. A genetic programming
+approach to designing convolutional neural network architectures. In Proceedings of the
+genetic and evolutionary computation conference, pages 497–504, 2017.
+Masanori Suganuma, Mete Ozay, and Takayuki Okatani. Exploiting the potential of stan-
+dard convolutional autoencoders for image restoration by evolutionary search. In Pro-
+ceedings of the International Conference on Machine Learning (ICML), pages 4771–4780.
+PMLR, 2018.
+Rhea Sukthanker, Samuel Dooley, John P Dickerson, Colin White, Frank Hutter, and Micah
+Goldblum. On the importance of architectures and hyperparameters for fairness in face
+recognition. arXiv preprint arXiv:2210.09943, 2022.
+Yanan Sun, Bing Xue, Mengjie Zhang, and Gary G Yen. Evolving deep convolutional neural
+networks for image classification. IEEE Transactions on Evolutionary Computation, 24
+(2):394–407, 2019.
+Yanan Sun, Bing Xue, Mengjie Zhang, Gary G Yen, and Jiancheng Lv. Automatically
+designing cnn architectures using the genetic algorithm for image classification. IEEE
+transactions on cybernetics, 50(9):3840–3854, 2020.
+Kevin Swersky, David Duvenaud, Jasper Snoek, Frank Hutter, and Michael A. Osborne.
+Raiders of the lost architecture: Kernels for bayesian optimization in conditional
+parameter spaces. arXiv preprint arXiv:1409.4011, 2014.
+Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4,
+inception-resnet and the impact of residual connections on learning. In Thirty-first AAAI
+conference on artificial intelligence, 2017.
+Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural
+networks. In Proceedings of the International Conference on Machine Learning (ICML),
+pages 6105–6114. PMLR, 2019.
+Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard,
+and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In
+Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
+Hidenori Tanaka, Daniel Kunin, Daniel L Yamins, and Surya Ganguli. Pruning neural
+networks without any data by iteratively conserving synaptic flow. Proceedings of the
+Annual Conference on Neural Information Processing Systems (NeurIPS), 33:6377–6389,
+2020.
+Manoel Tenorio and Wei-Tsih Lee. Self organizing neural networks for the identification
+problem. Proceedings of the Annual Conference on Neural Information Processing
+Systems (NeurIPS), 1, 1988.
+Lucas Theis, Iryna Korshunova, Alykhan Tejani, and Ferenc Huszár. Faster gaze prediction
+with dense networks and fisher pruning. arXiv preprint arXiv:1801.05787, 2018.
+
+Neural Architecture Search: Insights from 1000 Papers
+C. Thornton, F. Hutter, H. Hoos, and K. Leyton-Brown. Auto-WEKA: combined selection
+and hyperparameter optimization of classification algorithms. In I. Dhillon, Y. Koren,
+R. Ghani, T. Senator, P. Bradley, R. Parekh, J. He, R. Grossman, and R. Uthurusamy,
+editors, The 19th ACM SIGKDD International Conference on Knowledge Discovery and
+Data Mining (KDD’13), pages 847–855, 2013.
+Sebastian Thrun and Lorien Pratt. Learning to Learn. Springer Science+Business Media,
+1998.
+Yuan Tian, Qin Wang, Zhiwu Huang, Wen Li, Dengxin Dai, Minghao Yang, Jun Wang, and
+Olga Fink. Off-policy reinforcement learning for efficient and effective gan architecture
+search. In European Conference on Computer Vision, pages 175–192. Springer, 2020.
+Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles,
+and Hervé Jégou. Training data-efficient image transformers & distillation through
+attention. In International Conference on Machine Learning, pages 10347–10357. PMLR,
+2021.
+Renbo Tu, Nicholas Roberts, Mikhail Khodak, Junhong Shen, Frederic Sala, and Ameet
+Talwalkar. NAS-bench-360: Benchmarking neural architecture search on diverse tasks.
+In Proceedings of the Annual Conference on Neural Information Processing Systems
+(NeurIPS), Datasets and Benchmarks Track, 2022a.
+Renbo Tu, Nicholas Roberts, Vishak Prasad, Sibasis Nayak, Paarth Jain, Frederic Sala,
+Ganesh Ramakrishnan, Ameet Talwalkar, Willie Neiswanger, and Colin White. Automl
+for climate change: A call to action. arXiv preprint arXiv:2210.03324, 2022b.
+Joaquin Vanschoren. Meta-learning. In Hutter et al. (2019), pages 39–68.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N
+Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings
+of the Annual Conference on Neural Information Processing Systems (NeurIPS), pages
+5998–6008, 2017.
+Xingchen Wan, Binxin Ru, Pedro M Esperança, and Fabio Maria Carlucci. Approximate
+neural architecture search via operation distribution learning. In Proceedings of the
+IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2377–2386,
+2022a.
+Xingchen Wan, Binxin Ru, Pedro M Esperança, and Zhenguo Li. On redundancy and
+diversity in cell-based neural architecture search. In Proceedings of the International
+Conference on Learning Representations (ICLR), 2022b.
+Chaoqi Wang, Guodong Zhang, and Roger Grosse. Picking winning tickets before training
+by preserving gradient flow. In Proceedings of the International Conference on Learning
+Representations (ICLR), 2020a.
+Hanchao Wang and Jun Huan. Agan: Towards automated design of generative adversarial
+networks. arXiv preprint arXiv:1906.11080, 2019.
+Linnan Wang, Yiyang Zhao, Yuu Jinnai, and Rodrigo Fonseca. Alphax: exploring neural
+architectures with deep neural networks and Monte Carlo tree search. arXiv preprint
+arXiv:1805.07440, 2018.
+Linnan Wang, Yiyang Zhao, Yuu Jinnai, Yuandong Tian, and Rodrigo Fonseca. Neural
+architecture search using deep neural networks and Monte Carlo tree search. In
+Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, number 06,
+pages 9983–9991, 2020b.
+Ning Wang, Yang Gao, Hao Chen, Peng Wang, Zhi Tian, Chunhua Shen, and Yanning
+Zhang. Nas-fcos: Fast neural architecture search for object detection. In The IEEE/CVF
+Conference on Computer Vision and Pattern Recognition (CVPR), June 2020c.
+Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, and Cho-Jui Hsieh.
+Rethinking architecture selection in differentiable nas. In Proceedings of the International
+Conference on Learning Representations (ICLR), 2021.
+Zi Wang and Stefanie Jegelka. Max-value entropy search for efficient bayesian optimization.
+In Proceedings of the International Conference on Machine Learning (ICML), pages 3627–
+3635. PMLR, 2017.
+Tao Wei, Changhu Wang, Yong Rui, and Chang Wen Chen. Network morphism. In
+Proceedings of the International Conference on Machine Learning (ICML), 2016.
+Lilian Weng. Neural architecture search, 2020. URL https://lilianweng.github.io/
+posts/2020-08-06-nas/.
+Colin White, Willie Neiswanger, Sam Nolen, and Yash Savani. A study on encodings for
+neural architecture search. In Proceedings of the Annual Conference on Neural
+Information Processing Systems (NeurIPS), 2020.
+Colin White, Willie Neiswanger, and Yash Savani. Bananas: Bayesian optimization with
+neural architectures for neural architecture search. In Proceedings of the AAAI Conference
+on Artificial Intelligence (AAAI), 2021a.
+Colin White, Sam Nolen, and Yash Savani. Exploring the loss landscape in neural
+architecture search. In Uncertainty in Artificial Intelligence (UAI), pages 654–664.
+PMLR, 2021b.
+Colin White, Arber Zela, Binxin Ru, Yang Liu, and Frank Hutter. How powerful are
+performance predictors in neural architecture search? In Proceedings of the Annual
+Conference on Neural Information Processing Systems (NeurIPS), 2021c.
+Colin White, Mikhail Khodak, Renbo Tu, Shital Shah, Sébastien Bubeck, and Debadeepta
+Dey. A deeper look at zero-cost proxies for lightweight nas. In ICLR Blog Track, 2022.
+URL http://0.0.0.0:4000/2021/12/01/zero-cost-proxies/.
+Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist
+reinforcement learning. Machine Learning, 8(3–4):229–256, 1992.
+Martin Wistuba. Finding competitive network architectures within a day using UCT. In
+Proceedings of the 5th IEEE International Conference on Data Science and Advanced
+Analytics, pages 263–272, 2018. arXiv preprint arXiv:1712.07420.
+Martin Wistuba. Deep learning architecture search by neuro-cell-based evolution with
+function-preserving mutations. In Michele Berlingerio, Francesco Bonchi, Thomas
+Gärtner, Neil Hurley, and Georgiana Ifrim, editors, Machine Learning and Knowledge
+Discovery in Databases, pages 243–258, Cham, 2019. Springer International Publishing.
+Martin Wistuba, Ambrish Rawat, and Tejaswini Pedapati. A survey on neural architecture
+search. arXiv preprint arXiv:1905.01392, 2019.
+Catherine Wong, Neil Houlsby, Yifeng Lu, and Andrea Gesmundo. Transfer learning with
+neural automl. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi,
+and R. Garnett, editors, Proceedings of the Annual Conference on Neural Information
+Processing Systems (NeurIPS), 2018.
+Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong
+Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient
+convnet design via differentiable neural architecture search. In Proceedings of the
+IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages
+10734–10742, 2019.
+Yan Wu, Zhiwu Huang, Suryansh Kumar, Rhea Sanjay Sukthanker, Radu Timofte, and Luc
+Van Gool. Trilevel neural architecture search for efficient single image super-resolution.
+arXiv preprint arXiv:2101.06658, 2021.
+Lichuan Xiang, Łukasz Dudziak, Mohamed S Abdelfattah, Thomas Chau, Nicholas D
+Lane, and Hongkai Wen. Zero-cost proxies meet differentiable architecture search. arXiv
+preprint arXiv:2106.06799, 2021.
+Lingxi Xie and Alan Yuille. Genetic cnn. In Proceedings of the IEEE International
+Conference on Computer Vision, pages 1379–1388, 2017.
+Lingxi Xie, Xin Chen, Kaifeng Bi, Longhui Wei, Yuhui Xu, Lanfei Wang, Zhengsu Chen,
+An Xiao, Jianlong Chang, Xiaopeng Zhang, et al. Weight-sharing neural architecture
+search: A battle to shrink the optimization gap. ACM Computing Surveys (CSUR), 54
+(9):1–37, 2021.
+Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. Snas: stochastic neural
+architecture search. In Proceedings of the International Conference on Learning
+Representations (ICLR), 2018.
+Hang Xu, Lewei Yao, Wei Zhang, Xiaodan Liang, and Zhenguo Li. Auto-fpn: Automatic
+network architecture adaptation for object detection beyond classification. In The IEEE
+International Conference on Computer Vision (ICCV), October 2019a.
+Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Jian Li, Tao Qin, and Tie-Yan Liu. Nas-bert.
+In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data
+Mining, 2021a. doi: 10.1145/3447548.3467262. URL http://dx.doi.org/10.1145/3447548.3467262.
+Jin Xu, Xu Tan, Kaitao Song, Renqian Luo, Yichong Leng, Tao Qin, Tie-Yan Liu, and Jian
+Li. Analyzing and mitigating interference in neural architecture search. In Proceedings
+of the International Conference on Machine Learning (ICML). PMLR, 2022.
+Jingjing Xu, Liang Zhao, Junyang Lin, Rundong Gao, Xu Sun, and Hongxia Yang. Knas:
+green neural architecture search. In International Conference on Machine Learning, pages
+11613–11625. PMLR, 2021b.
+Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, and Hongkai
+Xiong. Pc-darts: Partial channel connections for memory-efficient architecture search. In
+Proceedings of the International Conference on Learning Representations (ICLR), 2019b.
+Shen Yan, Yu Zheng, Wei Ao, Xiao Zeng, and Mi Zhang. Does unsupervised architecture
+representation learning help neural architecture search? In Proceedings of the Annual
+Conference on Neural Information Processing Systems (NeurIPS), 2020.
+Shen Yan, Kaiqiang Song, Fei Liu, and Mi Zhang. Cate: Computation-aware neural
+architecture encoding with transformers. In Proceedings of the International Conference
+on Machine Learning (ICML), 2021a.
+Shen Yan, Colin White, Yash Savani, and Frank Hutter. Nas-bench-x11 and the power of
+learning curves. In Proceedings of the Annual Conference on Neural Information
+Processing Systems (NeurIPS), 2021b.
+Antoine Yang, Pedro M Esperança, and Fabio M Carlucci. Nas evaluation is frustratingly
+hard. In Proceedings of the International Conference on Learning Representations
+(ICLR), 2020.
+Lewei Yao, Hang Xu, Wei Zhang, Xiaodan Liang, and Zhenguo Li. Sm-nas:
+Structural-to-modular neural architecture search for object detection. In Proceedings of
+the AAAI Conference on Artificial Intelligence (AAAI), 2020.
+Quanming Yao, Mengshuo Wang, Yuqiang Chen, Wenyuan Dai, Yu-Feng Li, Wei-Wei Tu,
+Qiang Yang, and Yang Yu. Taking human out of learning applications: A survey on
+automated machine learning. arXiv preprint arXiv:1810.13306, 2018.
+Yichun Yin, Cheng Chen, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. Autotinybert:
+Automatic hyper-parameter optimization for efficient pre-trained language models. In
+ACL, 2021.
+Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, and Frank
+Hutter. Nas-bench-101: Towards reproducible neural architecture search. In Proceedings
+of the International Conference on Machine Learning (ICML), 2019.
+Kaicheng Yu, Rene Ranftl, and Mathieu Salzmann. How to train your super-net: An
+analysis of training heuristics in weight-sharing nas. arXiv preprint arXiv:2003.04276,
+2020.
+Tong Yu and Hong Zhu. Hyper-parameter optimization: A review of algorithms and
+applications. arXiv preprint arXiv:2003.05689, 2020.
+Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In British Machine
+Vision Conference, 2016.
+Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R
+Salakhutdinov, and Alexander J Smola. Deep sets. In Proceedings of the Annual
+Conference on Neural Information Processing Systems (NeurIPS), 2017.
+Sheheryar Zaidi, Arber Zela, Thomas Elsken, Chris C Holmes, Frank Hutter, and Yee Teh.
+Neural ensemble search for uncertainty estimation and dataset shift. Proceedings of the
+Annual Conference on Neural Information Processing Systems (NeurIPS), 34:7898–7911,
+2021.
+Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio
+Savarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE
+conference on computer vision and pattern recognition, pages 3712–3722, 2018.
+Arber Zela, Aaron Klein, Stefan Falkner, and Frank Hutter. Towards automated deep
+learning: Efficient joint neural architecture and hyperparameter search. arXiv preprint
+arXiv:1807.06906, 2018.
+Arber Zela, Thomas Elsken, Tonmoy Saikia, Yassine Marrakchi, Thomas Brox, and Frank
+Hutter. Understanding and robustifying differentiable architecture search. In Proceedings
+of the International Conference on Learning Representations (ICLR), 2020a.
+Arber Zela, Julien Siems, and Frank Hutter. Nas-bench-1shot1: Benchmarking and
+dissecting one-shot neural architecture search. In Proceedings of the International
+Conference on Learning Representations (ICLR), 2020b.
+Chris Zhang, Mengye Ren, and Raquel Urtasun. Graph hypernetworks for neural
+architecture search. In Proceedings of the International Conference on Learning
+Representations (ICLR), 2018.
+Haokui Zhang, Ying Li, Hao Chen, and Chunhua Shen. Memory-efficient hierarchical neural
+architecture search for image denoising. In Proceedings of the IEEE/CVF Conference on
+Computer Vision and Pattern Recognition (CVPR), pages 3657–3666, 2020a.
+Miao Zhang, Steven W Su, Shirui Pan, Xiaojun Chang, Ehsan M Abbasnejad, and Reza
+Haffari. idarts: Differentiable architecture search with stochastic implicit gradients. In
+International Conference on Machine Learning, pages 12557–12566. PMLR, 2021a.
+Muhan Zhang, Shali Jiang, Zhicheng Cui, Roman Garnett, and Yixin Chen. D-vae: A
+variational autoencoder for directed acyclic graphs. In Proceedings of the Annual
+Conference on Neural Information Processing Systems (NeurIPS), 2019.
+Yuge Zhang, Zejun Lin, Junyang Jiang, Quanlu Zhang, Yujing Wang, Hui Xue, Chen
+Zhang, and Yaming Yang. Deeper insights into weight sharing in neural architecture
+search. arXiv preprint arXiv:2001.01431, 2020b.
+Ziwei Zhang, Xin Wang, and Wenwu Zhu. Automated machine learning on graphs: A
+survey. IJCAI Survey Track, 2021b. arXiv preprint arXiv:2103.00742.
+Huan Zhao, Lanning Wei, and Quanming Yao. Simplifying architecture search for graph
+neural network. arXiv preprint arXiv:2008.11652, 2020a.
+Yiren Zhao, Duo Wang, Xitong Gao, Robert Mullins, Pietro Lio, and Mateja Jamnik.
+Probabilistic dual network architecture search on graphs. arXiv preprint
+arXiv:2003.09676, 2020b.
+Yiyang Zhao, Linnan Wang, Kevin Yang, Tianjun Zhang, Tian Guo, and Yuandong Tian.
+Multi-objective optimization by learning space partition. In International Conference on
+Learning Representations, 2021a.
+Yuekai Zhao, Li Dong, Yelong Shen, Zhihua Zhang, Furu Wei, and Weizhu Chen.
+Memory-efficient differentiable transformer architecture search. Findings of the
+Association for Computational Linguistics, 2021b.
+Dongzhan Zhou, Xinchi Zhou, Wenwei Zhang, Chen Change Loy, Shuai Yi, Xuesen Zhang,
+and Wanli Ouyang. Econas: Finding proxies for economical neural architecture search. In
+Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
+(CVPR), pages 11396–11404, 2020.
+Kaichen Zhou, Lanqing Hong, Shoukang Hu, Fengwei Zhou, Binxin Ru, Jiashi Feng, and
+Zhenguo Li. Dha: End-to-end joint optimization of data augmentation policy,
+hyper-parameter and architecture. arXiv preprint arXiv:2109.05765, 2021.
+Kaixiong Zhou, Qingquan Song, Xiao Huang, and Xia Hu. Auto-gnn: Neural architecture
+search of graph neural networks. arXiv preprint arXiv:1909.03184, 2019.
+Lucas Zimmer, Marius Lindauer, and Frank Hutter. Auto-pytorch tabular: Multi-fidelity
+metalearning for efficient and robust autodl. IEEE Transactions on Pattern Analysis and
+Machine Intelligence, 2021.
+Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In
+Proceedings of the International Conference on Learning Representations (ICLR), 2017.
+Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable
+architectures for scalable image recognition. In CVPR, 2018.
+page_content='2), over 1000 new NAS papers have been released in the last two years, warranting the need for an up-to-date survey on over-arching advances, which we aim to provide with this work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
1.1 A Brief History of NAS and Relation to Other Fields

NAS emerged as a subfield of automated machine learning (AutoML) (Hutter et al., 2019), the process of automating all steps in the machine learning pipeline, from data cleaning, to feature engineering and selection, to hyperparameter and architecture search. NAS has a large overlap with hyperparameter optimization (HPO) (Feurer and Hutter, 2019), which refers to the automated optimization of hyperparameters of the machine learning model. NAS is sometimes referred to as a subset of HPO (Li and Talwalkar, 2019), since NAS can be expressed as optimizing only the hyperparameters that correspond to the architecture, a subset of the entire set of model hyperparameters. However, the techniques for HPO vs. NAS are often substantially different. A typical HPO problem optimizes a mix of continuous and categorical hyperparameters, such as learning rate, dropout rate, batch size, momentum, activation function, normalization strategy, and so on. Typically, the domains of most hyperparameters are independent (that is, the set of possible values for each hyperparameter is not affected by the possible values of other hyperparameters). Therefore, the typical search space of an HPO problem is the product space of a mix of continuous and categorical dimensions.
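To make this concrete, here is a minimal sketch of sampling from such a product space; the hyperparameter names, domains, and the `sample` helper are invented for illustration and are not taken from any particular HPO library:

```python
import math
import random

# Hypothetical HPO search space: each hyperparameter has an independent
# domain, so the full space is the Cartesian product of the domains.
SPACE = {
    "learning_rate": ("log_uniform", 1e-5, 1e-1),        # continuous
    "dropout": ("uniform", 0.0, 0.5),                    # continuous
    "batch_size": ("choice", [32, 64, 128, 256]),        # categorical
    "activation": ("choice", ["relu", "tanh", "gelu"]),  # categorical
}

def sample(space, rng=random):
    """Draw one configuration by sampling every dimension independently."""
    config = {}
    for name, (kind, *args) in space.items():
        if kind == "uniform":
            lo, hi = args
            config[name] = rng.uniform(lo, hi)
        elif kind == "log_uniform":
            lo, hi = args
            config[name] = 10 ** rng.uniform(math.log10(lo), math.log10(hi))
        elif kind == "choice":
            config[name] = rng.choice(args[0])
    return config
```

Because every dimension is sampled independently, no choice constrains any other; this is exactly the independence property described above.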
Neural Architecture Search: Insights from 1000 Papers

By contrast, NAS is specifically focused on optimizing the topology of the architecture, which can be much more complex. The topology is typically represented by a directed acyclic graph (DAG), in which the nodes or edges are labeled by neural network operations. Therefore, the search space of a NAS problem is typically discrete¹ and can be represented directly as a graph, or as a hierarchical structure of conditional hyperparameters.
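For instance, a small architecture from such a discrete space can be written down directly as a labeled DAG. The following sketch uses an invented operation set; real search spaces such as cell-based benchmarks follow the same idea:

```python
# A tiny architecture as a labeled DAG. Nodes are numbered in topological
# order; each edge (u, v, op) applies operation `op` to the output of node
# u and feeds the result into node v. The operation set is illustrative.
OPS = {"conv3x3", "conv1x1", "maxpool3x3", "skip"}

architecture = [
    (0, 1, "conv3x3"),
    (0, 2, "skip"),
    (1, 2, "conv1x1"),
    (2, 3, "maxpool3x3"),
]

def is_valid(edges):
    """An edge list is a valid architecture here iff every edge points
    forward in the topological numbering (hence no cycles) and every
    label is a known operation."""
    return all(u < v and op in OPS for u, v, op in edges)
```

Since each design choice (which edges exist, which label each edge carries) is drawn from a finite set, the resulting space is discrete, in contrast to the largely continuous HPO spaces above.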
Although standard HPO algorithms can sometimes be adapted for NAS (Izquierdo et al., 2021; Klein et al., 2020; Li et al., 2020c; Mendoza et al., 2016; Zela et al., 2018; Zimmer et al., 2021), it is often much more efficient and effective to use NAS techniques which are tailored to optimize the intricate space of neural architectures. Furthermore, most modern NAS techniques go beyond black-box optimization algorithms by exploiting details specific to NAS, such as sharing weights among similar neural architectures to avoid training each of them from scratch.
Figure 1: Number of NAS papers by year.
Historically, NAS has been around since at least the late 1980s (Angeline et al., 1994; Kitano, 1990; Miller et al., 1989; Tenorio and Lee, 1988), but it did not gain widespread attention until the popular paper, NAS with Reinforcement Learning, by Zoph and Le (2017). There has since been a huge interest in NAS, with over 1000 papers released in the last two years (see Figure 1). By now, many different approaches, such as reinforcement learning, evolutionary algorithms, Bayesian optimization, and NAS-specific techniques based on weight sharing, have been explored. Perhaps the most popular recent approaches are one-shot techniques (Bender et al., 2018; Liu et al., 2019c), which often substantially speed up the search process compared to black-box optimization techniques. In recent years, a large body of follow-up work has focused on making one-shot methods more robust and reliable (Wang et al., 2021; Zela et al., 2020a).
In parallel, there has been a large push to make NAS research more reproducible and scientific, starting with the release of NAS-Bench-101 (Ying et al., 2019), the first tabular benchmark for NAS. Furthermore, while the early days of NAS mostly focused on image classification problems such as CIFAR-10 and ImageNet, the field has now expanded to many other domains, such as object detection (Ghiasi et al., 2019; Xu et al., 2019a), semantic segmentation (Chen et al., 2018; Liu et al., 2019a), speech recognition (Mehrotra et al., 2021), partial differential equation solving (Roberts et al., 2021; Shen et al., 2022; Tu et al., 2022a), protein folding (Roberts et al., 2021; Shen et al., 2022), and weather prediction (Tu et al., 2022b), and the field has seen a renewed interest in natural language processing (Chitty-Venkata et al., 2022; Javaheripi et al., 2022).
1.2 Background and Definitions

Prior NAS surveys (e.g. Elsken et al., 2019b; Wistuba et al., 2019) have referred to three dimensions of NAS: search space, search strategy, and performance evaluation strategy (see Figure 2). We define each term below, as this is a useful disambiguation for understanding many NAS methods. However, it is worth noting that the trichotomy cannot be applied to the large sub-area of one-shot methods, because for these methods, the search strategy is coupled with the performance evaluation strategy (Xie et al., 2021).

¹ Notably, some NAS techniques such as DARTS (Liu et al., 2019c) relax the domain to be continuous during the search, but then the hyperparameters are discretized in order to return the final architecture.

White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter

Figure 2: Overview of neural architecture search (Elsken et al., 2019b; Weng, 2020). A search strategy iteratively selects architectures (typically by using an architecture encoding method) from a predefined search space A. The architectures are passed to a performance estimation strategy, which returns the performance estimate to the search strategy. For one-shot methods, the search strategy and performance estimation strategy are inherently coupled.
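For black-box methods, this interplay can be sketched as a simple feedback loop. The skeleton below is a generic illustration, not any specific published method; the random-search strategy and the toy estimator are invented stand-ins:

```python
import random

def nas_loop(search_space, estimate, budget, rng=None):
    """Skeleton of black-box NAS: a search strategy proposes architectures
    from the search space, a performance estimation strategy scores each
    one, and the best architecture seen within the budget is returned."""
    rng = rng or random.Random()
    best_arch, best_score = None, float("-inf")
    for _ in range(budget):
        arch = rng.choice(search_space)          # search strategy proposes
        score = estimate(arch)                   # performance estimation
        if score > best_score:                   # feedback to the strategy
            best_arch, best_score = arch, score
    return best_arch, best_score

# Toy usage: "architectures" are just network depths, and the estimator is
# a cheap stand-in for validation accuracy that peaks at depth 12.
def toy_estimate(depth):
    return -abs(depth - 12)

arch, score = nas_loop(list(range(2, 33)), toy_estimate, budget=100,
                       rng=random.Random(0))
```

Replacing `rng.choice` with a smarter proposal rule (evolutionary mutation, a Bayesian-optimization acquisition step) changes only the search strategy; the loop structure and the estimator interface stay the same.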
A search space is the set of all architectures that the NAS algorithm is allowed to select. Common NAS search spaces range in size from a few thousand architectures to over 10^20. While the search space can in principle be extremely general, incorporating domain knowledge when designing the search space can simplify the search. However, adding too much domain knowledge introduces human bias, which reduces the chances of a NAS method finding truly novel architectures. Search spaces are discussed in more detail in Section 2.
A search strategy is an optimization technique used to find a high-performing architecture in the search space. There are generally two main categories of search strategies: black-box optimization based techniques (including multi-fidelity techniques) and one-shot techniques. However, there are some NAS methods for which both or neither category applies. Black-box optimization based techniques, such as reinforcement learning, Bayesian optimization, and evolutionary search, are surveyed in Section 3. One-shot methods, including supernet- and hypernet-based methods, are surveyed in Section 4.

A performance estimation strategy is any method used to quickly predict the performance of neural architectures in order to avoid fully training the architecture. For example, while we can run a discrete search strategy by fully training and evaluating architectures chosen throughout the search, using a performance estimation strategy such as learning curve extrapolation can greatly increase the speed of the search. Performance estimation strategies, and more generally speedup techniques, are surveyed in Section 5.
The most basic definition of NAS is as follows. Given a search space A, a dataset D, a training pipeline P, and a time or computation budget t, the goal is to find an architecture a ∈ A within budget t which has the highest possible validation accuracy when trained using dataset D and training pipeline P. A common method of approaching NAS is to approximately solve the following expression within time t:

    min_{a ∈ A} L_val(w*(a), a)   s.t.   w*(a) = argmin_w L_train(w, a).

Here, L_val and L_train denote the validation loss and training loss, respectively. While this is the core definition of NAS, other variants will be discussed throughout this survey. For example, we may want to return an architecture with constraints on the number of parameters (Section 6.2), or we may use meta-learning (Section 5.3) to improve performance.
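To make the bi-level structure concrete, consider a toy instance (all functions invented for illustration) in which the inner problem, fitting the weights for a fixed architecture, has a closed-form solution, and the outer problem selects the architecture:

```python
# Toy bi-level NAS problem: an "architecture" a is a single integer (say,
# a width), the "weights" w are one real number, and both losses are
# simple quadratics so the inner argmin is available in closed form.
def train_loss(w, a):
    return (w - a) ** 2                 # L_train: minimized at w = a

def best_weights(a):
    return float(a)                     # w*(a) = argmin_w L_train(w, a)

def val_loss(w, a):
    return (w - 7.0) ** 2 + 0.1 * a     # L_val: fit term plus a size penalty

# Outer minimization over a finite search space, with the inner problem
# solved exactly for each candidate architecture.
search_space = range(1, 16)
best_arch = min(search_space, key=lambda a: val_loss(best_weights(a), a))
```

In realistic settings the inner argmin can only be approximated by training, which is precisely what makes NAS expensive and motivates the performance estimation strategies above.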
Throughout the rest of this article, we provide a comprehensive guide to the latest NAS techniques and resources. Sections 2 to 5 are devoted to NAS techniques, surveying search spaces, black-box optimization techniques, one-shot techniques, and speedup techniques, respectively. Sections 6 to 10 cover extensions, applications, and resources, and Section 11 concludes by discussing promising future directions.
2. Search Spaces

The search space is perhaps the most essential ingredient of NAS. While other areas of AutoML overlap with NAS in terms of the optimization methods used, the architectural search space is unique to NAS. Furthermore, designing the search space is often the first step when setting up NAS. The majority of popular search spaces are task-specific and were heavily inspired by the state-of-the-art manual architectures in their respective application domains. For example, NAS-Bench-101, a popular image classification search space (Ying et al., 2019), was inspired by ResNet (He et al., 2016a) and Inception (Szegedy et al., 2017). In fact, the design of the search space represents an important trade-off between human bias and efficiency of search: if the search space is small and includes many hand-picked decisions, then NAS algorithms will have an easier time finding a high-performing architecture. On the other hand, if the search space is large, with more primitive building blocks, a NAS algorithm will need to run longer, but there is the possibility of discovering truly novel architectures (Real et al., 2020).
In this section, we survey the main categories of search spaces for NAS, as summarized in Table 1. We start in Section 2.1 by defining general terminology. In Sections 2.2 and 2.3, we discuss the relatively simple macro and chain-structured search spaces, respectively. In Section 2.4, we describe the most popular type of search space: the cell-based search space. In Section 2.5, we describe hierarchical search spaces. Finally, in Section 2.6, we discuss architecture encodings, an important design decision for NAS algorithms that is inherently tied to the choice of search space.
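As a preview of the encoding discussion, one common flavor of encoding flattens an architecture into a fixed-length binary vector: the upper triangle of its adjacency matrix followed by a one-hot operation label per node. The sketch below uses an invented three-operation set; benchmark-specific encodings differ in detail:

```python
OPS = ["conv3x3", "conv1x1", "maxpool"]  # hypothetical candidate operations

def encode(adj, node_ops):
    """Flatten a DAG: upper-triangular adjacency bits, then one one-hot
    block per node indicating which operation the node performs."""
    n = len(adj)
    bits = [adj[i][j] for i in range(n) for j in range(i + 1, n)]
    for op in node_ops:
        bits.extend(1 if op == candidate else 0 for candidate in OPS)
    return bits

# 3-node example: edges 0->1 and 1->2, plus a skip connection 0->2.
adj = [[0, 1, 1],
       [0, 0, 1],
       [0, 0, 0]]
vector = encode(adj, ["conv3x3", "maxpool", "conv1x1"])
# 3 adjacency bits + 3 nodes x 3 one-hot entries = a 12-dimensional vector.
```

Such fixed-length vectors are what allow generic optimizers and performance predictors to operate on architectures at all, which is why the encoding is inherently tied to the search space.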
2.1 Terminology

Search space terminology differs across the literature, depending on the type of search space. For clarity, we define the main terms here and in Appendix Figure 9. Operation/primitive denotes the atomic unit of the search space. For nearly all popular search spaces, this is a triplet of a fixed activation, an operation, and a fixed normalization, such as ReLU-conv 1x1-batchnorm, where the ReLU and BatchNorm are fixed and the middle operation is a choice among several different operations.
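In code, such a primitive can be viewed as a triplet in which only the middle slot is searchable; the sketch below is framework-agnostic and its operation names are illustrative:

```python
SEARCHABLE_OPS = ["conv1x1", "conv3x3", "maxpool3x3"]  # the searched middle slot

def make_primitive(op):
    """Build one search-space primitive: fixed activation, searchable
    operation, fixed normalization -- e.g. the ReLU-conv 1x1-batchnorm
    triplet described above."""
    if op not in SEARCHABLE_OPS:
        raise ValueError(f"unknown operation: {op}")
    return ("relu", op, "batchnorm")

# The full set of primitives available at each position in the architecture.
primitives = [make_primitive(op) for op in SEARCHABLE_OPS]
```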
White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter

Search space type | Examples | Structure | Searchable hyperparameters | Levels of topology
Macro search space | NASBOT (Kandasamy et al., 2018), EfficientNet (Tan and Le, 2019) | DAG | Operation types, DAG topology, macro hyperparameters | 1
Chain-structured search space | MobileNetV2 (Sandler et al., 2018) | Chain | Operation types, macro hyperparameters | 1
Cell-based search space | DARTS (Liu et al., 2019c) | Duplicated cells | Operation type, cell topology | 1
Hierarchical search space | Hier. Repr. (Liu et al., 2018b), Auto-DeepLab (Liu et al., 2019b) | Varied | Operation type, cell/DAG topology, macro hyperparameters | > 1

Table 1: Summary of the types of NAS search spaces.
Layer is often used in chain-structured or macro search spaces to denote the same thing as an operation or primitive. However, it sometimes refers to well-known combinations of operations, such as the inverted bottleneck residual (Cai et al., 2019; Sandler et al., 2018; Tan and Le, 2019; Tan et al., 2019).

Block/Module is sometimes used to denote a sequential stack of layers, following the notation used in most chain-structured and macro search spaces (Cai et al., 2020; Tan and Le, 2019; Tan et al., 2019).

Cell is used to denote a directed acyclic graph of operations in cell-based search spaces. The maximum number of operations in a cell is often fixed.

Motif is used to denote a sub-pattern formed from multiple operations in an architecture. Some literature refers to a cell as a higher-level motif and a smaller set of operations as a base-level motif.
2.2 Macro Search Spaces

In the NAS literature, macro search spaces may refer to one of two types. First, they may refer to search spaces which encode the entire architecture in one level (as opposed to cell-based or hierarchical search spaces), which were popular in 2017 and 2018. Second, they may refer to search spaces which focus only on macro-level hyperparameters.
For the former, an entire architecture is represented as a single directed acyclic graph (Baker et al., 2017; Kandasamy et al., 2018; Real et al., 2017; Zoph and Le, 2017). These search spaces typically have a choice of operation at each node in the graph, as well as the choice of DAG topology. For example, the NASBOT CNN search space (Kandasamy et al., 2018) consists of choices of different convolution, pooling, and fully connected layers, with any DAG topology of depth at most 25.
Neural Architecture Search: Insights from 1000 Papers

The second type of macro search space (Dong et al., 2021b; Duan et al., 2021; Tan and Le, 2019) focuses on the variation of macro-level hyperparameters, such as where and how much to downsample the spatial resolution throughout the architecture, while keeping the architecture topology and operations fixed.[2] For example, Tan and Le (2019) propose a CNN search space by varying the network depth, width, and input feature resolution.
Compared to other search spaces, macro search spaces have high representation power: their flexible structure allows the possibility of discovering novel architectures. However, their main downside is that they are very slow to search. In the next two sections, we discuss types of search spaces which have more rigidity, making them faster to search.
2.3 Chain-Structured Search Spaces

Chain-structured search spaces, as the name suggests, have a simple architecture topology: a sequential chain of operation layers. They often take state-of-the-art manual designs, such as ResNet (He et al., 2016b) or MobileNets (Howard et al., 2017), as the backbone.
There are several chain-structured search spaces based on convolutional networks. ProxylessNAS (Cai et al., 2019) starts with the MobileNetV2 (Sandler et al., 2018) architecture and searches over the kernel sizes and expansion ratios in the inverted bottleneck residual layers. XD (Roberts et al., 2021) and DASH (Shen et al., 2022) start with a LeNet (LeCun et al., 1999), ResNet (He et al., 2016a), or WideResNet (Zagoruyko and Komodakis, 2016), and search over an expressive generalization of convolutions based on Kaleidoscope matrices (Dao et al., 2020), or over kernel sizes and dilations, respectively.
Chain-structured search spaces are also popular in transformer-based search spaces. For example, the search space from Lightweight Transformer Search (LTS) (Javaheripi et al., 2022) consists of a chain-structured configuration of the popular GPT family of architectures (Brown et al., 2020; Radford et al., 2019) for autoregressive language modeling, with searchable choices for the number of layers, model dimension, adaptive embedding dimension, dimension of the feedforward neural network in a transformer layer, and number of heads in each transformer layer. The search spaces from NAS-BERT (Xu et al., 2021a) and MAGIC (Xu et al., 2022) both consist of a chain-structured search space over the BERT architecture (Devlin et al., 2019) with up to 26 operation choices consisting of variants of multi-head attention, feedforward layers, and convolutions with different kernel sizes.
Chain-structured search spaces are conceptually simple, making them easy to design and implement. They also often contain strong architectures that can be found relatively quickly. Their main downside is that, due to the simple architecture topology, there is a comparatively lower chance of discovering a truly novel architecture.
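Because the topology is fixed, a chain-structured search space is just the Cartesian product of per-layer choices and can even be enumerated when small. The sketch below is a hedged illustration in the spirit of ProxylessNAS-style per-layer kernel-size and expansion-ratio choices; the specific values and layer count are assumptions, not the actual ProxylessNAS configuration.

```python
from itertools import product

# A chain-structured search space: a fixed sequence of layers, where
# only per-layer hyperparameters are searched (values illustrative).
KERNEL_SIZES = [3, 5, 7]
EXPANSION_RATIOS = [3, 6]
NUM_LAYERS = 4

# Each layer independently picks one (kernel size, expansion ratio) pair.
layer_choices = list(product(KERNEL_SIZES, EXPANSION_RATIOS))

# The whole space is the product over layers; its size is simply
# |choices_per_layer| ** num_layers.
num_architectures = len(layer_choices) ** NUM_LAYERS

def enumerate_chain_space():
    """Yield every architecture as a tuple of per-layer choices."""
    yield from product(layer_choices, repeat=NUM_LAYERS)
```

With 6 choices per layer over 4 layers, this toy space holds 6**4 = 1296 architectures, which illustrates why chain spaces are easy to design but topologically limited.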
2.4 Cell-based Search Spaces

The cell-based search space is perhaps the most popular type of search space in NAS. It is inspired by the fact that state-of-the-art human-designed CNNs often consist of repeated patterns, for example, residual blocks in ResNets (Zoph et al., 2018). Thus, instead of searching for the entire network architecture from scratch, Zoph et al. (2018) proposed to only search over relatively small cells, and stack the cells several times in sequence to form the overall architecture. Formally, the searchable cells make up the micro structure of the search space, while the outer skeleton (the macro structure) is fixed.
[2] Strictly speaking, since these search spaces have a fixed architecture topology, they may also be called hyperparameter tuning search spaces instead of NAS search spaces.
Figure 3: Illustration of cell-based search spaces. The outer skeleton across cells (left) is fixed, while the cells are searchable. NASNet assigns operations to nodes (middle) while DARTS assigns operations to edges (right).
The first modern cell-based search space, NASNet, was proposed by Zoph et al. (2018). It comprises two types of cells: the normal cell and the reduction cell. Both types have the same structure, but the initial operations in the reduction cell have a stride of two to halve the input spatial resolution. Each NASNet cell can be represented as a DAG with seventeen non-input nodes (see Figure 3 (middle)). The nodes are arranged in triples of two operation nodes (such as convolution and pooling operations) and a combination node (such as addition or concatenation). The final NASNet architecture is formed by stacking multiple normal and reduction cells in sequence (see Figure 3 (left)). Overall, there are 10^35 unique architectures in the NASNet search space.
Since the NASNet search space, many other cell-based search spaces have been proposed, all of which share a high-level similarity to NASNet, with the main differences being the fixed macro structure, the layout and constraints in the cells, and the choices of operations within the cells. Two of the most popular cell-based search spaces are NAS-Bench-101 (Ying et al., 2019) and the DARTS search space (Liu et al., 2019c). NAS-Bench-101 is the first tabular benchmark for NAS (discussed in Section 8), and its cells consist of seven nodes, each with three choices of operations; it contains 423,624 unique architectures. The DARTS search space differs more fundamentally: while it also has two searchable cells, the DARTS cells have operation choices on the edges of the graph rather than on the nodes. In the DARTS cell, the nodes represent latent representations and the edges are operations, whereas in the NASNet cell, the latent representations are on the edges and the nodes are operations. The DARTS cells (see Figure 3 (right)) contain eight edges, each of which has eight choices of operations. Overall, the DARTS space contains a total of 10^18 unique architectures.
Besides image classification, similar cell designs have also been adopted for language models. For example, NAS-Bench-ASR (Mehrotra et al., 2021) provides a search space of convolutional speech model cells for automatic speech recognition, and there are several LSTM-based search spaces (Klyuchnikov et al., 2022; Liu et al., 2019c; Pham et al., 2018).
The cell-based design significantly reduces the complexity of search spaces, while often resulting in a high-performing final architecture. This has led to cell-based search spaces being the most popular type of search space in recent years. Furthermore, by detaching the depth of an architecture from the search, the cell-based structure is transferable: the optimal cells learned on a small dataset (e.g., CIFAR-10) typically transfer well to a large dataset (e.g., ImageNet) by increasing the number of cells and filters in the overall architecture (Liu et al., 2019c; Zoph et al., 2018).
Despite their popularity, cell-based search spaces face some criticisms. First, while the DARTS search space contains a seemingly large number of 10^18 architectures, the variance in the performance of DARTS architectures is rather small (Wan et al., 2022b; Yang et al., 2020). This small variance may contribute to the fact that sophisticated search strategies can only give marginal gains over the average performance of randomly sampled architectures (Yang et al., 2020). Moreover, there are many ad-hoc design choices and fixed hyperparameters that come with cell-based search spaces whose impact is unclear (Wan et al., 2022b), such as the separation of normal and reduction cells, the number of nodes, and the set of operations. Finally, although limiting the search to a cell significantly reduces the search complexity, this practice reduces the expressiveness of the NAS search space, making it difficult to find highly novel architectures with cell-based search spaces. In light of this, some recent work advocates for searching for macro connections among cells in addition to the micro cell structure. We discuss this in more detail in the next section.
2.5 Hierarchical Search Spaces

Up to this point, all search spaces described have had a flat representation, in which an architecture is built by defining its hyperparameters, topology, and operation primitives in a single design level. Specifically, only one level of topology is searched, whether at the cell level or architecture level. On the other hand, hierarchical search spaces involve designing motifs at different levels, where each higher-level motif is often represented as a DAG of lower-level motifs (Chrostoforidis et al., 2021; Liu et al., 2018b; Ru et al., 2020b).
A simple class of hierarchical search spaces has two searchable levels, obtained by adding macro-level architecture hyperparameters to cell-based or chain-structured search spaces. For example, the MnasNet search space (Tan et al., 2019) uses MobileNetV2 as the backbone. Liu et al. (2019b) designed a two-level search space for semantic image segmentation, and follow-up work extended it to image denoising (Zhang et al., 2020a) and stereo matching (Kumari and Kaur, 2016). Finally, Chen et al. (2021a) propose a two-level transformer-based search space for vision tasks, inspired by ViT (Dosovitskiy et al., 2021) and DeiT (Touvron et al., 2021). The search space consists of a number of sequential blocks which can be a combination of local (convolution) or global (self-attention) layers.
Beyond two levels, Liu et al. (2018b) and Wu et al. (2021) propose hierarchies of three levels. Liu et al. (2018b) propose a three-level hierarchy, where each level is a graph made up of components from the previous level (see Figure 4).

White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter

Wu et al. (2021) propose a different three-level hierarchy, consisting of kernel hyperparameters, cell-based hyperparameters, and macro hyperparameters. The former design is extended beyond three levels in two follow-up works: Ru et al. (2020b) proposed a hierarchical design of four levels, controlled by a set of hyperparameters corresponding to a random graph generator, and Chrostoforidis et al. (2021) introduced a recursive building process to permit a varying number of hierarchical levels as well as a flexible topology among top-level motifs.
Figure 4: Illustration of the hierarchical representation proposed in Liu et al. (2018b). Level 1 of the hierarchy consists of choices of operation primitives. Level 2 consists of selecting the topology across small sets of operation primitives. Level 3 consists of selecting the topology across the constructions from level 2.
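This nesting can be made concrete with a small sketch. The data structures below are illustrative only (not the exact scheme of Liu et al. (2018b)): a level-2 motif is a DAG whose edges are labeled with level-1 primitives, and a level-3 motif is a DAG whose edges are labeled with level-2 motifs.

```python
# Sketch of a three-level hierarchical representation (illustrative only).
# Level 1: operation primitives, named by strings.
PRIMITIVES = ["3x3_conv", "1x1_conv", "max_pool"]

def flatten(motif, level):
    """Recursively expand a motif into the list of primitives it uses.

    A motif at level > 1 is a list of edges (src, dst, sub_motif), where
    sub_motif is a primitive name (level 2) or a lower-level motif (level 3+).
    """
    if level == 1:
        return [motif]  # base case: a single operation primitive
    ops = []
    for src, dst, sub in motif:
        ops.extend(flatten(sub, level - 1))
    return ops

# Level-2 motif: a DAG over three primitive-labeled edges.
motif_a = [(0, 1, "3x3_conv"), (0, 2, "max_pool"), (1, 2, "1x1_conv")]
# Level-3 motif: a DAG whose edges are themselves level-2 motifs.
arch = [(0, 1, motif_a), (1, 2, motif_a)]

print(flatten(arch, 3))  # six primitives in total
```

The recursion mirrors the "graph made up of components from the previous level" construction: unrolling the top-level graph yields the full set of primitive operations.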
There are multiple benefits to using hierarchical search spaces. First, hierarchical search spaces tend to be more expressive. Most chain-structured, cell-based, and macro search spaces can be seen as a hierarchical search space with a single searchable level, but having two or more levels allows us to search over more diverse and complex architecture designs. Furthermore, a hierarchical representation of a large architecture is an effective way to reduce the search complexity, which can lead to better search efficiency (Chrostoforidis et al., 2021; Liu et al., 2018b; Ru et al., 2020b). On the other hand, hierarchical search spaces can be more challenging to implement and search through.
2.6 Architecture Encodings

Throughout this section, we have discussed a wide variety of NAS search spaces. As a segue into the next two sections focusing on search strategies, we note that many NAS algorithms and subroutines need a succinct representation of each architecture, or encoding, in order to perform operations such as mutating an architecture, quantifying the similarity between two architectures, or predicting the test performance of an architecture. This makes architecture encodings important for several areas of NAS, including discrete NAS algorithms (Section 3) and performance prediction (Section 5.1). In most search spaces, the architecture can be represented compactly as a directed acyclic graph (DAG), where each node or edge represents an operation. For example, architectures in cell-based search spaces and chain-structured search spaces can be represented in this way. However, hierarchical search spaces cannot be fully represented using a DAG, and often need a conditionally structured encoding, where the number of levels of conditional hyperparameters corresponds to the number of levels of the hierarchy.
Neural Architecture Search: Insights from 1000 Papers

For cell-based search spaces, one of the most commonly used encodings is the adjacency matrix, along with a list of operations, of the searchable cell(s) (Ying et al., 2019; Zoph and Le, 2017). To achieve better generalizability, Ning et al. (2020) proposed a graph-based encoding scheme and White et al. (2021a) proposed a path-based encoding scheme, both of which model the flow of information propagating through the network.
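As a concrete illustration, the adjacency-matrix encoding can be flattened into a fixed-length vector. This is a minimal sketch in the spirit of the NAS-Bench-101 encoding, with a made-up three-operation vocabulary; it is not any library's actual API.

```python
OPS = ["conv3x3", "conv1x1", "maxpool"]  # illustrative operation vocabulary

def encode_cell(adj, ops):
    """Flatten a cell (adjacency matrix + per-node operation list) into a vector."""
    n = len(adj)
    # Upper triangle of the adjacency matrix: the DAG's candidate edges.
    edge_bits = [adj[i][j] for i in range(n) for j in range(i + 1, n)]
    # One-hot encoding of each node's operation, in a fixed vocabulary order.
    op_bits = [1 if op == o else 0 for op in ops for o in OPS]
    return edge_bits + op_bits

# A 4-node cell forming a simple chain: node0 -> node1 -> node2 -> node3.
adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
vec = encode_cell(adj, ["conv3x3", "conv1x1", "maxpool", "conv3x3"])
# 6 edge bits + 4 nodes * 3 ops = 18 entries
```

Fixed-length vectors like this are what allow a surrogate model or a mutation operator to treat architectures as ordinary feature vectors.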
Finally, another type of encoding, applicable to all search spaces, is a learned encoding using unsupervised pre-training. In this technique, before we run NAS, we use a set of untrained architectures to learn an architecture encoding, for example, by using an autoencoder (Li et al., 2020b; Lukasik et al., 2021, 2022; Yan et al., 2020; Zhang et al., 2019) or a transformer (Yan et al., 2021a).
When choosing an architecture encoding, scalability and generalizability are important traits. Recent work has shown that different NAS subroutines, such as sampling a random architecture, perturbing an architecture, or training a surrogate model, may each perform best with different encodings (White et al., 2020). Furthermore, even small changes to the architecture encoding scheme can have significant effects on the performance of NAS (White et al., 2020; Ying et al., 2019).
3. Black-Box Optimization Techniques

Now that we have covered search spaces, we move to perhaps the most widely studied component of NAS: the search strategy. This is what we run to find an optimal architecture from the search space. Search strategies generally fall into two categories: black-box optimization techniques and one-shot techniques. However, some methods that we discuss include characteristics of both, or neither, of these categories. We first discuss black-box optimization techniques in this section, followed by one-shot techniques in Section 4.
For black-box optimization, we discuss baselines (Section 3.1), reinforcement learning (Section 3.2), evolution (Section 3.3), Bayesian optimization (Section 3.4), and Monte-Carlo tree search (Section 3.5). Black-box optimization techniques are widely used and studied today, due to their strong performance and ease of use. In general, black-box optimization techniques tend to use more computational resources than one-shot techniques, because they train many architectures independently (without sharing weights across architectures as one-shot techniques do). However, they also have many advantages over one-shot techniques, such as robustness (and the lack of catastrophic failure modes), simpler optimization of non-differentiable objectives, simpler parallelism, joint optimization with other hyperparameters, and easier adaptation to, e.g., new problems, datasets, or search spaces. They are also often conceptually simpler, making them easier to implement and use.
3.1 Baselines

One of the simplest possible baselines for NAS is random search: architectures are selected randomly from the search space and then fully trained. In the end, the architecture with the best validation accuracy is output.
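The whole baseline fits in a few lines. In this sketch, `train_and_eval` is a hypothetical stand-in for the expensive train-then-validate step, and the toy scoring table is invented for illustration.

```python
import random

def random_search(search_space, train_and_eval, k):
    """Evaluate k random architectures; return the best by validation accuracy."""
    best_arch, best_acc = None, float("-inf")
    for _ in range(k):
        arch = random.choice(search_space)   # sample uniformly at random
        acc = train_and_eval(arch)           # fully train, then validate
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc

# Toy stand-in: "architectures" are ints, "accuracy" is a fixed random score.
space = list(range(1000))
scores = {a: random.random() for a in space}
arch, acc = random_search(space, scores.get, k=50)
```

In a real NAS setting the only change is that `train_and_eval` launches an actual training run, which is why even this trivial loop can be very expensive.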
Despite its naïveté, multiple papers have shown that random search performs surprisingly well (Chen et al., 2018; Li and Talwalkar, 2019; Sciuto et al., 2020; Yang et al., 2020). This is especially true for highly engineered search spaces with a high fraction of strong architectures, since random search with a budget of k evaluations will, in expectation, find architectures in the top 100/k% of the search space.
However, other works show that random search does not perform well on large, diverse search spaces (Bender et al., 2020; Real et al., 2020).

Algorithm 1 General Reinforcement Learning NAS Algorithm
Input: Search space A, number of iterations T.
Randomly initialize weights θ of the controller architecture.
for t = 1, . . . , T do
    Train architecture a ∼ π(a; θ), randomly sampled from the controller policy π(a; θ).
    Update controller parameters θ by performing a gradient update ∇θ Ea∼π(a;θ)[Lval(a)].
end for
Output: Architecture selected from the trained policy π(a; θ∗).
Still, random search is highly recommended as a baseline comparison for new NAS algorithms (Lindauer and Hutter, 2020; Yang et al., 2020), and can be made highly competitive by incorporating weight sharing (Li and Talwalkar, 2019), zero-cost proxies (Abdelfattah et al., 2021), or learning curve extrapolation (Yan et al., 2021b). Multiple papers (Sciuto et al., 2020; Yang et al., 2020) have also proposed a related, simpler baseline: random sampling, which reports the average performance of architectures across the entire search space.
In addition to random search, recent papers have shown that local search is a strong baseline for NAS on both small (Ottelander et al., 2021; White et al., 2021b) and large (Siems et al., 2020) search spaces. This is true even for the simplest form of local search: iteratively train and evaluate all of the neighbors of the best architecture found so far, where the neighborhood is typically defined as all architectures that differ by one operation or edge. Local search can be sped up substantially by using network morphisms to warm-start the optimization of neighboring architectures (Elsken et al., 2017).
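The simplest greedy variant described above can be sketched as follows, with a toy bit-string search space standing in for real architectures (flipping one bit plays the role of changing one operation or edge; the scoring function is an invented stand-in for validation accuracy).

```python
def local_search(init_arch, neighbors, train_and_eval):
    """Greedy local search: move to the best neighbor until no neighbor improves."""
    best, best_acc = init_arch, train_and_eval(init_arch)
    while True:
        scored = [(train_and_eval(a), a) for a in neighbors(best)]
        acc, arch = max(scored)
        if acc <= best_acc:
            return best, best_acc        # local optimum reached
        best, best_acc = arch, acc

# Toy search space: bit-strings of length 5; neighbors differ in one position.
def neighbors(arch):
    return [arch[:i] + (1 - arch[i],) + arch[i + 1:] for i in range(len(arch))]

score = lambda arch: sum(arch)           # stand-in for validation accuracy
best, acc = local_search((0, 0, 1, 0, 0), neighbors, score)
print(best, acc)  # -> (1, 1, 1, 1, 1) 5
```

Note that every iteration trains a full neighborhood of architectures, which is exactly the cost that network-morphism warm-starting reduces.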
3.2 Reinforcement Learning

Reinforcement learning (RL) was very prominent in the early days of modern NAS. Notably, the seminal work by Zoph and Le (2017) used RL on 800 GPUs for two weeks to obtain competitive performance on CIFAR-10 and Penn Treebank; this finding received substantial media attention and started the modern resurgence of NAS. It was followed up by several more reinforcement learning approaches (Pham et al., 2018; Zoph et al., 2018).
Most reinforcement learning approaches model architectures as a sequence of actions generated by a controller (Baker et al., 2017; Zoph and Le, 2017). The validation accuracy of the sampled architectures after training is used as a reward signal to update the controller in order to maximize its expected value; see Algorithm 1. The controller is usually a recurrent neural network (RNN) (Zoph and Le, 2017; Zoph et al., 2018) that outputs a sequence of components corresponding to an architecture. After each output architecture is trained and evaluated, the RNN parameters are updated to maximize the expected validation accuracy of output architectures, using REINFORCE (Williams, 1992; Zoph and Le, 2017) or proximal policy optimization (Schulman et al., 2017; Zoph et al., 2018).
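The REINFORCE update in Algorithm 1 can be sketched without any deep-learning framework by deliberately simplifying the controller to a single categorical decision and replacing the training run with a lookup-table reward (all names and the toy reward here are invented for illustration, not Zoph and Le's actual controller).

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_step(logits, reward_fn, lr=0.1):
    """One REINFORCE update on a categorical controller policy pi = softmax(logits).

    For the sampled action a with reward R, the policy-gradient estimate of
    d E[R] / d logit_i is R * (1[i == a] - pi(i)).
    """
    probs = softmax(logits)
    a = random.choices(range(len(logits)), weights=probs)[0]  # sample an "architecture"
    r = reward_fn(a)                                          # stands in for val. accuracy
    for i in range(len(logits)):
        logits[i] += lr * r * ((1.0 if i == a else 0.0) - probs[i])  # ascent step
    return a, r

# Toy reward table: choice 2 is the best "architecture".
random.seed(0)
reward = lambda a: [0.2, 0.5, 0.9][a]
logits = [0.0, 0.0, 0.0]
for _ in range(2000):
    reinforce_step(logits, reward)
# After training, the policy should concentrate on the highest-reward choice.
```

An RNN controller generalizes this by emitting one such categorical decision per architecture component, with all decisions sharing the recurrent parameters θ.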
ENAS (Pham et al., 2018) follows a similar strategy but speeds up the reward estimation using weight sharing; we will discuss this in detail in Section 4. More recently, RL has not been used as prominently for NAS, since it has been shown to be outperformed in head-to-head comparisons by evolutionary methods (Real et al., 2019) and Bayesian optimization (Ying et al., 2019), which we discuss next.
Algorithm 2 General Evolutionary NAS Algorithm
Input: Search space A, number of iterations T.
Randomly sample and train a population of architectures from the search space A.
for t = 1, . . . , T do
    Sample (based on accuracy) a set of parent architectures from the population.
    Mutate the parent architectures to generate children architectures, and train them.
    Add the children to the population, and kill off the architectures that are the oldest (or have the lowest accuracy) among the current population.
end for
Output: Architecture from the population with the highest validation accuracy.
3.3 Evolutionary and Genetic Algorithms

Decades before the recent NAS resurgence, one of the first works in NAS used an evolutionary algorithm (Miller et al., 1989). In other early works, it was common to use evolutionary algorithms to simultaneously optimize the neural architecture and its weights (Angeline et al., 1994; Floreano et al., 2008; Stanley and Miikkulainen, 2002; Stanley et al., 2009). Today, evolutionary algorithms are still popular for the optimization of architectures due to their flexibility, conceptual simplicity, and competitive results (Real et al., 2019), but the weight optimization is typically left to standard SGD-based approaches.
Evolutionary NAS algorithms work by iteratively updating a population of architectures. In each step, one or more “parent” architectures in the population are sampled (typically based on the validation accuracy of the architectures), combined, and mutated to create new “children” architectures. These architectures are then trained and added to the population, replacing individuals in the population with worse performance. See Algorithm 2.
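The loop of Algorithm 2 can be sketched in a few lines of Python. Everything concrete here is an illustrative assumption rather than any cited paper's method: architectures are tuples of operation names, `toy_accuracy` stands in for training and measuring validation accuracy, and mutation resamples a single operation.

```python
import random

OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]

def toy_accuracy(arch):
    # Stand-in for "train the architecture, then measure validation accuracy".
    return sum(op.startswith("conv") for op in arch) / len(arch)

def mutate(arch, rng):
    # Resample the operation at one randomly chosen position.
    i = rng.randrange(len(arch))
    return arch[:i] + (rng.choice(OPS),) + arch[i + 1:]

def evolutionary_nas(num_iters=50, pop_size=10, arch_len=6, seed=0):
    rng = random.Random(seed)
    # Randomly sample (and "train") an initial population.
    population = [tuple(rng.choice(OPS) for _ in range(arch_len))
                  for _ in range(pop_size)]
    for _ in range(num_iters):
        # Sample a parent based on accuracy: the fitter of two random picks.
        a, b = rng.sample(population, 2)
        parent = max((a, b), key=toy_accuracy)
        # Mutate the parent, "train" the child, and add it to the population.
        population.append(mutate(parent, rng))
        # Kill off the lowest-accuracy architecture in the population.
        population.remove(min(population, key=toy_accuracy))
    return max(population, key=toy_accuracy)
```

Because only the worst member is ever removed, the best accuracy in the population is non-decreasing over iterations.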
There are many other ways in which evolutionary algorithms differ, including sampling the initial population, selecting the parents, and generating the children. For selecting the initial population, approaches include using trivial architectures (Real et al., 2017), randomly sampling architectures from the search space (Real et al., 2019; Sun et al., 2019), or using hand-picked high-performing architectures (Fujino et al., 2017).
Selecting parents from the population makes up one of the core components of the evolutionary algorithm. Perhaps the most popular method to sample parents is tournament selection (Almalaq and Zhang, 2018; Goldberg and Deb, 1991; Real et al., 2017, 2019; Sun et al., 2019, 2020), which selects the best architecture(s) out of a randomly sampled subset of the population. Other common approaches include random sampling weighted by fitness (Gibb et al., 2018; Loni et al., 2020; Song et al., 2020; Xie and Yuille, 2017), or choosing the current best architecture(s) as parents (Elsken et al., 2017; Suganuma et al., 2017, 2018). These methods trade off exploration against exploiting the best region found so far.
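Tournament selection itself is compact; this sketch (a generic illustration, not tied to any cited implementation) draws k random members and returns the fittest, so larger k means stronger selection pressure:

```python
import random

def tournament_select(population, fitness, k=3, rng=None):
    # Draw k contestants uniformly at random, without replacement,
    # and return the one with the highest fitness.
    rng = rng or random.Random(0)
    contestants = rng.sample(population, k)
    return max(contestants, key=fitness)
```

With k equal to the population size this degenerates into always picking the single best individual; with k = 1 it reduces to uniform random sampling.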
One particularly successful evolutionary algorithm is regularized evolution by Real et al. (2019). This is a fairly standard evolutionary method, with the novelty of dropping, in each step, the architecture that has been in the population the longest, even if it has the highest performance. This method outperformed random search and RL in a head-to-head comparison and achieved state-of-the-art performance on ImageNet at the time of its release (Real et al., 2019).
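The aging mechanism is the only change relative to a standard tournament-based loop. A minimal sketch, under toy assumptions (a hypothetical bit-string "architecture" whose bit count stands in for trained accuracy):

```python
import collections
import random

def regularized_evolution(fitness, random_arch, mutate, cycles=200,
                          pop_size=20, seed=0):
    rng = random.Random(seed)
    population = collections.deque(random_arch(rng) for _ in range(pop_size))
    history = list(population)  # every architecture ever "trained"
    for _ in range(cycles):
        # Standard tournament selection of a parent...
        parent = max(rng.sample(list(population), 3), key=fitness)
        child = mutate(parent, rng)
        population.append(child)
        history.append(child)
        # ...but aging always drops the oldest member, even the current best.
        population.popleft()
    return max(history, key=fitness)

def flip_one_bit(arch, rng):
    # Toy mutation: flip a single randomly chosen bit.
    i = rng.randrange(len(arch))
    return arch[:i] + (arch[i] ^ 1,) + arch[i + 1:]
```

Note that the final answer is taken over the full history, since aging may evict the best architecture from the live population.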
White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter

Algorithm 3 General Bayesian Optimization NAS Algorithm
Input: Search space A, number of iterations T, acquisition function φ.
Randomly sample and train a population of architectures from the search space A.
for t = 1, . . . , T do
    Train a surrogate model based on the current population.
    Select architecture at by maximizing φ(a), based on the surrogate model.
    Train architecture at and add it to the current population.
end for
Output: Architecture from the population with the highest validation accuracy.
3.4 Bayesian Optimization

Bayesian optimization (BO; see, e.g., Frazier (2018) or Garnett (2023)) is a powerful method for optimizing expensive functions, and it has seen significant success within NAS. There are two key components to BO: (1) building a probabilistic surrogate to model the unknown objective based on past observations, and (2) defining an acquisition function to balance exploration and exploitation during the search. BO is an iterative algorithm which works by selecting the architecture that maximizes the acquisition function (computed using the surrogate), training this architecture, and retraining the surrogate using this new architecture to start the next iteration. See Algorithm 3.
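Algorithm 3 can be sketched with a deliberately crude surrogate. Everything concrete here is an illustrative assumption, not a real NAS surrogate: "architectures" are integers, the surrogate predicts the value of the nearest observed candidate, and the uncertainty bonus grows with distance to it.

```python
import random

def bo_nas(objective, candidates, num_iters=20, num_init=5, beta=0.05, seed=0):
    rng = random.Random(seed)
    # "Train" a random initial population of architectures.
    observed = {a: objective(a) for a in rng.sample(candidates, num_init)}

    def acquisition(a):
        # Toy surrogate: nearest-neighbor prediction (exploitation) plus an
        # uncertainty bonus for candidates far from all observations.
        nearest = min(observed, key=lambda o: abs(o - a))
        return observed[nearest] + beta * abs(nearest - a)

    for _ in range(num_iters):
        pool = [a for a in candidates if a not in observed]
        if not pool:
            break
        a_next = max(pool, key=acquisition)   # maximize the acquisition function
        observed[a_next] = objective(a_next)  # train it, add it to the population
    return max(observed, key=observed.get), observed
```

For example, `bo_nas(lambda a: -((a - 70) / 100) ** 2, list(range(100)))` searches a toy objective peaked at 70, spending 25 "trainings" rather than evaluating all 100 candidates.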
Initial BO-based NAS techniques developed custom distance metrics among architectures, for example, with a specialized architecture kernel (Swersky et al., 2014), an optimal transport-inspired distance function (Kandasamy et al., 2018), or a tree-Wasserstein distance function (Nguyen et al., 2021), allowing a typical Gaussian process (GP) based surrogate with BO. However, using a standard GP surrogate often does not perform well for NAS, as search spaces are typically high-dimensional, non-continuous, and graph-like. To overcome this, one line of work first encodes the architectures, using encodings discussed in Section 2.6, and then trains a model, such as a tree-Parzen estimator (Bergstra et al., 2011; Falkner et al., 2018), random forest (Hutter et al., 2011; Ying et al., 2019), or neural network (Springenberg et al., 2016; White et al., 2021a).
Another line of work projects architecture information into a low-dimensional continuous latent space on which conventional BO can be applied effectively (Ru et al., 2020b; Wan et al., 2022a). Another class of surrogate models uses graph neural networks (Ma et al., 2019; Ru et al., 2021; Shi et al., 2020) or a graph-based kernel (Ru et al., 2021) to naturally handle the graph representation of architectures without the need for an explicit encoding.
The acquisition function, which trades off exploration and exploitation during the search, is another important design component for BO. There are various types of acquisition functions used in NAS, such as expected improvement (Jones et al., 1998; Močkus, 1975), upper confidence bound (Cox and John, 1992; Srinivas et al., 2010), and information-theoretic ones (Hennig and Schuler, 2012; Hernández-Lobato et al., 2014; Hvarfner et al., 2022; Wang and Jegelka, 2017).
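For a candidate whose surrogate posterior is Gaussian N(mu, sigma^2), two of these acquisition functions have simple closed forms. A sketch under the maximization convention, with `best_so_far` denoting the best observed value so far:

```python
import math

def ucb(mu, sigma, beta=2.0):
    # Upper confidence bound: an optimistic estimate of the candidate's value.
    return mu + beta * sigma

def expected_improvement(mu, sigma, best_so_far):
    # Closed-form expected improvement over the incumbent best_so_far.
    if sigma == 0.0:
        return max(0.0, mu - best_so_far)
    z = (mu - best_so_far) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return (mu - best_so_far) * cdf + sigma * pdf
```

Both reward a high predicted mean (exploitation) and high predictive uncertainty (exploration); beta in UCB sets that tradeoff explicitly.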
In NAS, optimizing the acquisition function in each round of BO is challenging due to the non-continuous search spaces, and furthermore, exhaustively evaluating acquisition function values on all possible architectures is computationally non-viable. The most common method for optimizing the acquisition function in NAS is by randomly mutating a small pool of the best architectures queried so far and, of the mutated architectures, selecting the one(s) with the highest acquisition function value (Kandasamy et al., 2018; Ma et al., 2019; Ru et al., 2021; Schneider et al., 2021; Shi et al., 2020; White et al., 2021a). Other methods for optimizing the acquisition function include local search, evolutionary search, and random search (Ru et al., 2021; Shi et al., 2020; Ying et al., 2019).
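That mutate-the-best-pool step can be sketched as follows; the integer "architectures", the ±1 mutation, and the acquisition function in the usage example are illustrative assumptions, not any cited system's design.

```python
import random

def optimize_acquisition(observed, acquisition, mutate, pool_size=5,
                         children_per_parent=4, rng=None):
    rng = rng or random.Random(0)
    # Take a small pool of the best architectures queried so far...
    pool = sorted(observed, key=observed.get, reverse=True)[:pool_size]
    # ...mutate each of them several times...
    children = [mutate(p, rng) for p in pool for _ in range(children_per_parent)]
    # ...and return the unseen child with the highest acquisition value.
    unseen = [c for c in children if c not in observed]
    return max(unseen, key=acquisition, default=None)
```

Usage, with a pretend surrogate score peaked at 7:

```python
observed = {0: -7, 5: -2, 10: -3}             # architecture -> validation accuracy
acq = lambda a: -abs(a - 7)                   # pretend surrogate-based score
step = lambda a, rng: a + rng.choice([-1, 1]) # local mutation
proposal = optimize_acquisition(observed, acq, step)
```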
3.5 Monte Carlo Tree Search

Another class of NAS methods is based on Monte Carlo Tree Search (MCTS). MCTS is the key backbone search algorithm used in AlphaGo (Silver et al., 2016) and AlphaZero (Silver et al., 2017), which achieve super-human performance in Go and chess, respectively. MCTS finds optimal decisions by recursively sampling new decisions (e.g., making a move in chess, or selecting an operation for an architecture in NAS), running stochastic rollouts to obtain the reward (such as winning a chess game, or discovering a high-performing architecture), and then backpropagating to update the weight of the initial decision. Across iterations, the algorithm builds a decision tree to bias the search towards more promising regions by balancing exploration and exploitation in decision making (Browne et al., 2012).
MCTS was first applied to NAS by Negrinho and Gordon (2017), who represented the search space and its hyperparameters using a modular language. This results in a tree-structured, extensible search space, contrary to the fixed search spaces of prior work. Wistuba (2018) introduced a similar method but with two different UCT (Upper Confidence bounds applied to Trees) algorithms. MCTS was first adapted to cell-based search spaces by using a state-action representation (Wang et al., 2018). The authors also improved sample efficiency by using a neural network to estimate the accuracy of sampled architectures, thus enabling a higher number of rollouts. This was followed up by adding further efficiency in pruning the tree by learning partitionings (Wang et al., 2020b), and by application to multi-objective NAS (Zhao et al., 2021a).
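The exploration-exploitation balance in the MCTS selection step is typically the UCT rule. A sketch with hypothetical per-child statistics (total reward and visit count for each decision at the current tree node):

```python
import math

def uct_score(total_reward, visits, parent_visits, c=math.sqrt(2)):
    # Mean reward (exploitation) plus a bonus for rarely tried decisions
    # (exploration); unvisited decisions are always tried first.
    if visits == 0:
        return float("inf")
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_decision(children, parent_visits):
    # children maps a decision (e.g. an operation choice) to (total_reward, visits).
    return max(children, key=lambda d: uct_score(*children[d], parent_visits))
```

As the parent's visit count grows, the logarithmic bonus of under-visited children grows too, so every decision keeps being revisited occasionally.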
4. One-Shot Techniques

Throughout Section 3, we have seen that the predominant methodology in the early stages of NAS research was to iteratively sample architectures from the search space, train them, and use their performance to guide the search. The main drawback of these methods, when applied without speedup techniques, is their immense computational cost, sometimes on the order of thousands of GPU days (Real et al., 2019; Zoph and Le, 2017), due to the need to train thousands of architectures independently and from scratch.

As an alternative, one-shot techniques were introduced to avoid training each architecture from scratch, thus circumventing the associated computational burden. As of 2022, they are one of the most popular techniques in NAS research. Rather than training each architecture from scratch, one-shot approaches implicitly train all architectures in the search space via a single (“one-shot”) training of a hypernetwork or supernetwork.
A hypernetwork is a neural network which generates the weights of other neural networks (Schmidhuber, 1992), while a supernetwork (often used synonymously with “one-shot model” in the literature) is an over-parameterized architecture that contains all possible architectures in the search space as subnetworks (see Figure 5).

3. On the other hand, recent developments in performance estimation and speed-up techniques (Section 5) have significantly reduced the computational overhead of methods that use black-box optimization as a base, making these methods affordable for many applications and users.

Figure 5: A supernet comprises all possible architectures in the search space. Each architecture is a subnetwork (subgraph) in the supernet.
The idea of a supernetwork was introduced by Saxena and Verbeek (2016) and was popularized in 2018 by works such as Bender et al. (2018), Pham et al. (2018), and Liu et al. (2019c). Once a supernet is trained, each architecture from the search space can be evaluated by inheriting its weights from the corresponding subnet within the supernet. The reason for the scalability and efficiency of supernets is that a linear increase in the number of candidate operations causes only a linear increase in the computational cost of training, while the number of subnets in the supernet increases exponentially. Therefore, supernets allow us to train an exponential number of architectures for a linear compute cost.
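To make this counting argument concrete, the following sketch (my own illustration; the cell size of 14 edges and 8 candidate operations is a hypothetical, DARTS-like example) contrasts the linearly growing training cost with the exponentially growing number of subnets:

```python
# A supernet with E edges and K candidate operations per edge trains one
# weight tensor per (edge, operation) pair: E * K modules in total.
# A subnet selects one operation per edge, so there are K ** E subnets.

def supernet_cost(num_edges: int, num_ops: int) -> int:
    """Number of operation modules the supernet trains (linear in num_ops)."""
    return num_edges * num_ops

def num_subnets(num_edges: int, num_ops: int) -> int:
    """Number of distinct architectures contained in the supernet (exponential)."""
    return num_ops ** num_edges

# A hypothetical DARTS-like cell with 14 edges and 8 candidate operations:
print(supernet_cost(14, 8))  # 112 modules to train
print(num_subnets(14, 8))    # 4398046511104 (~4.4 * 10^12) architectures covered
```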
A key assumption made in one-shot approaches is that when using the one-shot model to evaluate architectures, the ranking of architectures is relatively consistent with the ranking one would obtain from training them independently. The extent to which this assumption holds true has been substantially debated, with work showing evidence for (Li et al., 2021c; Pham et al., 2018; Yu et al., 2020) and against (Pourchot et al., 2020; Sciuto et al., 2020; Zela et al., 2020b; Zhang et al., 2020b) the claim across various settings. The validity of the assumption depends on the search space design, the techniques used to train the one-shot model, and the dataset itself, and it is hard to predict to what degree the assumption will hold in a particular case (Sciuto et al., 2020; Zhang et al., 2020b).
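A standard way to quantify how well this ranking assumption holds is a rank correlation such as Kendall's tau between supernet-estimated and stand-alone accuracies. The sketch below computes it from scratch; the accuracy values are made up for illustration:

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall rank correlation: +1 = identical ranking, -1 = fully reversed."""
    assert len(xs) == len(ys) and len(xs) > 1
    concordant = discordant = 0
    for i, j in combinations(range(len(xs)), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (len(xs) * (len(xs) - 1) / 2)

# Hypothetical accuracies for five architectures:
standalone = [92.1, 93.4, 90.8, 94.0, 91.5]  # trained independently
supernet   = [88.0, 89.5, 87.1, 89.9, 86.0]  # estimated via weight inheritance
print(kendall_tau(standalone, supernet))     # 0.8: only one pair is swapped
```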
While the supernet allows quick evaluation of all architectures, we must still decide on a search strategy, which can be as simple as running a black-box optimization algorithm while the supernet is training (such as in Pham et al. (2018)) or after the supernet is trained (such as in Bender et al. (2018)). We discuss these families of techniques in Section 4.1. A popular line of work uses gradient descent to optimize the architecture hyperparameters in tandem with training the supernet (such as DARTS (Liu et al., 2019c) and numerous subsequent methods). We discuss this family of techniques in Section 4.2. Finally, in Section 4.3, we discuss hypernetworks. Figure 6 provides a taxonomy of one-shot families.
Neural Architecture Search: Insights from 1000 Papers

[Figure 6: A taxonomy of the predominant one-shot families. One-shot methods comprise hypernetwork methods (e.g., SMASH, GHNN) and supernetwork methods (e.g., DARTS, OFA); supernetwork methods use either non-differentiable optimization (e.g., OFA) or differentiable optimization (e.g., DARTS), with DARTS "fixes" targeting operation biases (e.g., DARTS-PT), rank disorder (e.g., SGAS), high memory (e.g., PC-DARTS), and poor generalization (e.g., Robust-DARTS). A hypernetwork is a neural net which generates the weights of other neural nets. A supernetwork is an over-parameterized neural net that contains the set of neural nets from the search space as subnetworks, and it can be used with differentiable optimization (including DARTS and follow-ups) or non-differentiable optimization.]
4.1 Non-Differentiable Supernet-Based Methods

We start by describing supernet-based methods which do not make use of differentiable optimization.
Some methods in this family decouple the supernet training and architecture search: first train a supernet, and then run a black-box optimization algorithm to search for the best architecture. Other methods train a supernet while simultaneously running a non-differentiable search algorithm, such as reinforcement learning, to select subnetworks. Bender et al. (2018), Li and Talwalkar (2019), and Guo et al. (2020b) propose simple methods to train the supernet and then use a black-box optimization algorithm to extract the best architecture from it.
Bender et al. (2018) construct the supernet by creating a separate node corresponding to an operation, in every place where there is a choice of operation; they then train the supernet as if it were a standard neural net, with one exception: nodes are randomly dropped during training, with the level of dropout increasing linearly throughout training. In follow-up work, Li and Talwalkar (2019) and Guo et al. (2020b) take this idea a step further: in each training step, they randomly sample one architecture and only update the weights of the supernet corresponding to that architecture. These techniques better mimic what is happening at evaluation time: only a subnetwork is evaluated rather than the entire supernet. Furthermore, these procedures use significantly less memory than training all the weights of a supernet.
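A minimal sketch of this single-path sampling scheme, using toy stand-ins for real operations and gradients (the operation names and the `+= lr` update are illustrative placeholders, not code from the cited papers):

```python
import random

# Toy supernet: one weight per (edge, candidate operation) pair.
ops = ["conv3x3", "conv5x5", "skip"]
edges = ["e1", "e2", "e3"]
weights = {(e, o): 0.0 for e in edges for o in ops}

def sample_architecture(rng):
    """Uniformly sample one operation per edge, as in single-path methods."""
    return {e: rng.choice(ops) for e in edges}

def train_step(weights, arch, lr=0.1):
    """Update ONLY the weights belonging to the sampled subnetwork."""
    for e, o in arch.items():
        weights[(e, o)] += lr  # placeholder for -lr * gradient

rng = random.Random(0)
arch = sample_architecture(rng)
train_step(weights, arch)
touched = {k for k, v in weights.items() if v != 0.0}
assert touched == {(e, arch[e]) for e in edges}  # one op per edge updated
```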
Each method concludes by using the trained supernet to quickly evaluate architectures when conducting random search (Bender et al., 2018; Li and Talwalkar, 2019) or evolutionary search (Guo et al., 2020b). The architecture identified in the end is then trained from scratch.
As will be discussed in Section 6.2, deploying neural nets in practice often comes with constraints on latency or memory. While the supernets considered thus far tend to only contain architectures of approximately the same size, Cai et al. (2020) propose a supernet containing subnetworks of various sizes.
This Once-for-all (OFA) approach uses a progressive shrinking strategy which starts by sampling the largest subnetworks and then moves to smaller subnetworks, in order to minimize the co-adaptation among subnetworks and effectively train networks of different sizes "once for all". In a subsequent search phase, architectures are selected based on different constraints on latency and memory. While Cai et al. (2020) use random search for this search phase, Guo et al. (2020b) proposed to improve this approach further by using evolutionary search in the search phase.

Algorithm 4 DARTS - Differentiable Architecture Search
Input: Search space A, number of iterations T, hyperparameter ξ.
Randomly initialize a one-shot model based on A with weights w and architecture hyperparameters α.
for t = 1, . . . , T do
    Perform a gradient update on the architecture weights α according to Equation 1.
    Perform a gradient update on w according to ∇w Ltrain(w, α).
end for
Output: Derive the final architecture by taking the argmax of α, across all operation choices, and then retrain this architecture from scratch.
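The progressive shrinking strategy described above can be sketched as a phase-dependent sampling schedule (the phase boundaries and depth values here are made up for illustration; OFA's actual schedule also shrinks kernel sizes and widths and uses knowledge distillation from the full network):

```python
import random

# Each phase widens the set of depths that may be sampled, starting from
# the largest subnetworks only and progressively adding smaller ones.
PHASES = [
    {"depths": [4]},        # phase 1: train only the largest subnetworks
    {"depths": [3, 4]},     # phase 2: also sample depth-3 subnetworks
    {"depths": [2, 3, 4]},  # phase 3: also sample depth-2 subnetworks
]

def sample_depth(phase_idx, rng):
    """Sample a subnetwork depth allowed in the current phase."""
    return rng.choice(PHASES[phase_idx]["depths"])

rng = random.Random(0)
for i, phase in enumerate(PHASES):
    sampled = {sample_depth(i, rng) for _ in range(100)}
    assert sampled <= set(phase["depths"])  # never a disallowed size
```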
One of the earliest supernet-based approaches is ENAS (Efficient Neural Architecture Search) (Pham et al., 2018), which trains the supernet while running a search algorithm in tandem. Specifically, the search strategy is similar to the RL controller-based approach from Zoph and Le (2017) (described in Section 3.2) but estimates the performance of each architecture using a supernet. The training procedure alternates between (i) selecting an architecture, evaluating it, and updating the weights of the supernet, and (ii) updating the weights of the controller by sampling several architectures to estimate the REINFORCE reward. While this approach searches for an architecture in tandem with training the supernet, it uses a separate controller network to guide the search. In the next section, we discuss methods which conduct the search via gradient descent using only the supernet.
4.2 Differentiable Supernet-Based Methods

In this section, we review supernet-based NAS methods that employ differentiable optimization techniques. We first describe the seminal DARTS (Differentiable Architecture Search) approach by Liu et al. (2019c), and then we move to various follow-up works and other differentiable approaches.
The DARTS approach uses a continuous relaxation of the discrete architecture search space, which enables the use of gradient descent in order to find a high-performing local optimum significantly faster than black-box optimization methods. It can be applied to any DAG-based search space which has different choices of operations on each edge, by using a "zero" operation to simulate the absence of an edge. At the start, each edge (i, j) in the DARTS search space consists of multiple possible candidate operations o, each of which is associated with a continuous hyperparameter α_o^(i,j) ∈ [0, 1]. While the supernet is training, edge (i, j) computes a mix of all candidate operations, weighted by each α_o^(i,j).
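Concretely, the mixed output on an edge is a softmax-weighted sum of the candidate operations' outputs. A toy numerical sketch with scalar "operations" (my own illustration, not DARTS's implementation):

```python
import math

def softmax(alphas):
    exps = [math.exp(a) for a in alphas]
    total = sum(exps)
    return [e / total for e in exps]

# Toy candidate operations on one edge: identity, doubling, and "zero"
# (the zero op simulates the absence of the edge).
candidate_ops = [lambda x: x, lambda x: 2 * x, lambda x: 0 * x]

def mixed_op(x, alphas):
    """Continuous relaxation: o_bar(x) = sum_o softmax(alpha)_o * o(x)."""
    ws = softmax(alphas)
    return sum(w * op(x) for w, op in zip(ws, candidate_ops))

# With equal alphas, each op contributes 1/3: (x + 2x + 0) / 3 = x.
print(mixed_op(3.0, [0.0, 0.0, 0.0]))  # 3.0
```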
[Figure 7: Differentiable one-shot NAS algorithms have four main steps: randomly initializing the architecture hyperparameters, optimizing the architecture hyperparameters and weights via alternating gradient descent, discretizing the optimized architecture hyperparameters, and re-training the resulting subnetwork from scratch.]

The architecture hyperparameters α are optimized jointly with the supernet model weights w via alternating gradient descent. In particular, in order to update the architecture weights α via gradient descent, DARTS makes use of the following approximation:

∇α Lval(w*(α), α) ≈ ∇α Lval(w − ξ ∇w Ltrain(w, α), α),   (1)

where Ltrain denotes the training loss, Lval denotes the validation loss, ξ is the learning rate, and w*(α) denotes the weights that minimize the training loss of the architecture corresponding to α. In other words, in order to avoid the expensive inner optimization, w*(α) is approximated by a single step of gradient descent (w − ξ ∇w Ltrain(w, α)). This is similar to MAML (Finn et al., 2017) and other works (Luketina et al., 2016; Metz et al., 2017). Although this strategy is not guaranteed to converge, Liu et al. (2019c) showed that it works well in practice with a suitable choice of ξ.
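To see the alternating scheme in action, here is a toy one-dimensional version (entirely illustrative: the quadratic losses and step sizes are my own construction) in which the architecture parameter is updated through the one-step lookahead of Equation 1 and the weight is updated on the training loss:

```python
# Toy bilevel problem:
#   Ltrain(w, alpha) = (w - alpha)^2   (w is pulled toward alpha)
#   Lval(w)          = (w - 2.0)^2     (validation prefers w near 2)
# Per Equation 1, the alpha-gradient is taken through one virtual
# training step on w, so alpha "feels" how it would move w.

def dtrain_dw(w, alpha):
    return 2.0 * (w - alpha)

def dval_dalpha(w, alpha, xi):
    w_plus = w - xi * dtrain_dw(w, alpha)  # one-step lookahead on w
    dwplus_dalpha = 2.0 * xi               # d(w_plus)/d(alpha), analytic here
    return 2.0 * (w_plus - 2.0) * dwplus_dalpha

w, alpha, xi, lr = 0.0, 0.0, 0.1, 0.1
for _ in range(200):
    alpha -= lr * dval_dalpha(w, alpha, xi)  # architecture step (Equation 1)
    w -= lr * dtrain_dw(w, alpha)            # weight step on Ltrain
print(round(alpha, 2), round(w, 2))  # both converge to 2.0
```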
After the training phase, DARTS obtains a discrete architecture by selecting the operation with the maximum value of α on each edge (the discretization step) and then re-trains it from scratch. Figure 7 provides an illustration of DARTS.
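The discretization step itself amounts to an argmax over the candidate operations on each edge; a short sketch with hypothetical α values:

```python
# Optimized architecture hyperparameters per edge (hypothetical values).
alpha = {
    "e1": {"conv3x3": 1.4, "conv5x5": 0.2, "skip": -0.3},
    "e2": {"conv3x3": -0.1, "conv5x5": 0.9, "skip": 0.5},
}

def discretize(alpha):
    """Select the highest-weighted operation on each edge."""
    return {edge: max(ops, key=ops.get) for edge, ops in alpha.items()}

print(discretize(alpha))  # {'e1': 'conv3x3', 'e2': 'conv5x5'}
```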
DARTS gained significant attention in the AutoML community due to its simplicity, its novelty, and the release of easy-to-use code. Furthermore, the original technique left room for improvement across various axes. Consequently, there has been a large body of follow-up work seeking to improve various parts of the DARTS approach. In the rest of the section, we cover the main categories of improvements (see Figure 6).
4.2.1 Rank Disorder

As mentioned at the start of Section 4, nearly all one-shot methods make a key assumption: the ranking of architectures evaluated with the supernet is relatively consistent with the ranking one would obtain from training them independently; when this assumption is not met, it is known as rank disorder (Li et al., 2021c; Sciuto et al., 2020).
+page_content=' While there is considerable debate both for (Li et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2021c;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Pham et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2018;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2020) and against (Pourchot et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Sciuto et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Zela et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2020b;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Zhang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2020b) the assumption, many works have attempted to reduce the problem of rank disorder.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
Several methods propose to gradually increase the network depth, or to gradually prune the set of operation candidates during training, showing that this causes the weights to better adapt to the most promising operation choices. Progressive-DARTS (Chen et al., 2019a) gradually increases the network depth while simultaneously pruning the operations with the smallest weights. SGAS (Li et al., 2020a) chooses operations throughout the training procedure, based on two criteria: selection certainty (calculated via the entropy of the operation distribution) and selection stability (calculated via the movement of the operation distribution). Finally, XNAS (Nayman et al., 2019) makes use of the exponentiated gradient algorithm (Kivinen and Warmuth, 1997), which dynamically prunes inferior operation choices during the search while also allowing the recovery of "late bloomers", i.e., operation choices which only become accurate later in the training procedure.
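The exponentiated gradient update that XNAS builds on is simple to state: multiply each operation weight by the exponential of its negative scaled gradient, then renormalize onto the probability simplex. A minimal numpy sketch (the three-operation setup and the gradient values are illustrative, not taken from XNAS):

```python
import numpy as np

def exponentiated_gradient_step(theta, grad, eta=0.1):
    """One exponentiated-gradient update: multiplicative step, then
    renormalization so theta stays on the probability simplex."""
    theta = theta * np.exp(-eta * grad)
    return theta / theta.sum()

# Toy example: three candidate operations; op 0 keeps receiving the
# most favorable (most negative) loss gradient, so its mass grows.
theta = np.ones(3) / 3
grads = np.array([-1.0, 0.5, 0.5])  # hypothetical per-op gradients
for _ in range(50):
    theta = exponentiated_gradient_step(theta, grads)
print(theta.argmax())  # op 0 dominates
```

Because the update is multiplicative, an operation's weight decays exponentially but never reaches zero, which is what allows "late bloomers" to recover if their gradients later improve.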
4.2.2 Operation Biases

Several works show that differentiable NAS techniques tend to favor skip connections over other operation choices (Liang et al., 2019; Wang et al., 2021; Zela et al., 2020a), which might be caused by the supernet using skip connections to over-compensate for vanishing gradients (Chu et al., 2021).
Various methods have been proposed to fix this bias. DARTS+ (Liang et al., 2019) proposes an early stopping method based on the stability of the ranking of the architecture weights, while DARTS− (Chu et al., 2021) separates the skip connection weights from other operation weights via auxiliary edges. FairDARTS (Chu et al., 2020) sets all operation weights independent of all others, and then pushes these architecture weights toward zero or one in the loss function. Taking a different approach, Wang et al. (2021) show that it is acceptable for skip connections to have higher weights, as long as the final architecture is not selected based on these weights. Instead, after training the supernet, their algorithm, DARTS-PT, selects each operation whose removal causes the largest decrease in supernet accuracy.
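The perturbation-based selection rule of DARTS-PT can be sketched in a few lines; the supernet is stood in for by a hypothetical validation-accuracy function, since the real criterion requires a trained supernet:

```python
import numpy as np

# Hypothetical stand-in for supernet validation accuracy as a function
# of which operations on an edge are active (mask[i] = 1 keeps op i).
def supernet_val_acc(mask):
    contributions = np.array([0.02, 0.15, 0.05])  # made-up per-op effects
    return 0.70 + (mask * contributions).sum()

def select_op_by_perturbation(num_ops=3):
    """DARTS-PT-style selection: keep the op whose removal hurts most."""
    full = supernet_val_acc(np.ones(num_ops))
    drops = []
    for i in range(num_ops):
        mask = np.ones(num_ops)
        mask[i] = 0.0  # perturb: mask out op i
        drops.append(full - supernet_val_acc(mask))
    return int(np.argmax(drops))

print(select_op_by_perturbation())  # op 1 causes the largest drop
```

Note that the selection never consults the architecture weights α at all, which is the point of the method.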
Rather than fixing the biases among a small hand-picked set of operations, Shen et al. (2022) instead use a search space that significantly reduces human bias: they fix a standard convolutional network and search for the kernel sizes and dilations of its operations. This simple approach is broadly applicable across computer vision, PDE solving, protein folding, and other tasks. In order to make one-shot training more efficient, their algorithm, DASH, computes the mixture of operations using the Fourier diagonalization of convolution.
4.2.3 Poor Test Generalization

Several works seek to improve the generalization performance of DARTS through various means. Zela et al. (2020a) and Chen and Hsieh (2020) show that DARTS often converges to sharp local minima of the loss landscape (regions of high validation-loss curvature in the architecture hyperparameter space), which, after the discretization step, can cause the algorithm to return an architecture with poor test generalization.
Robust-DARTS (Zela et al., 2020a) fixes this issue by making the training more robust through data augmentation, L2 regularization of the inner objective L_train, and early stopping. Similarly, rather than optimizing the training loss, Smooth-DARTS (Chen and Hsieh, 2020) optimizes the expected or worst-case training loss over a local neighborhood of the architecture hyperparameters. Taking a different approach, GAEA (Li et al., 2021c), XD (Roberts et al., 2021), and StacNAS (Guilin et al., 2019) all use a single-level optimization rather than the typical bi-level optimization, treating the architecture hyperparameters as normal architecture weights, and show that this leads to better generalization. Furthermore, GAEA re-parameterizes the architecture parameters over the simplex and updates them using the exponentiated gradient algorithm (similar to XNAS from Section 4.2.1), showing that this is better suited to the underlying geometry of the architecture search space. Finally, Amended-DARTS (Bi et al., 2019) and iDARTS (Zhang et al., 2021a) both take the approach of deriving more accurate approximations of the gradients of α (Equation 1), showing that this leads to a more stable optimization and better generalization.
4.2.4 High Memory Consumption

The memory required to train a supernet is much higher than for a normal neural net: it scales linearly with the size of the set of candidate operations. Recall from Section 4.1 that multiple works reduced this memory by, in each training step, masking out all operations except for the ones corresponding to one or a few subnetworks.
Various works have proposed techniques to mask out operations for differentiable NAS as well, i.e., while simultaneously optimizing the architecture hyperparameters. Cai et al. (2019) proposed ProxylessNAS, which solves this problem by modifying the BinaryConnect (Courbariaux et al., 2015) discretization method: in each training step, for each operation choice, all operations are masked out except one, chosen randomly with probability proportional to its current value of α. Cai et al. (2019) show that this procedure converges to a single high-performing subnetwork.
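A minimal sketch of this BinaryConnect-style sampling step (the architecture parameters α below are made up, and a real implementation would also propagate gradients through the sampled path):

```python
import numpy as np

def sample_active_op(alpha, rng):
    """Sample a single operation per edge with probability softmax(alpha),
    masking out the rest (BinaryConnect-style discretization)."""
    probs = np.exp(alpha - alpha.max())  # stable softmax
    probs /= probs.sum()
    return rng.choice(len(alpha), p=probs)

rng = np.random.default_rng(0)
alpha = np.array([2.0, 0.1, -1.0])  # hypothetical architecture params
counts = np.bincount([sample_active_op(alpha, rng) for _ in range(1000)],
                     minlength=3)
# The op with the largest alpha is sampled most often.
print(counts.argmax())  # 0
```

Since only one operation is instantiated per step, the activation memory no longer scales with the number of candidate operations.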
GDAS (Dong and Yang, 2019) and DSNAS (Hu et al., 2020; Xie et al., 2018) use a Gumbel-softmax distribution over a one-hot encoding of the operation choices, which is a different way to sample a single operation in each training step while maintaining differentiability.
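A sketch of the Gumbel-softmax relaxation over operation choices (the logits and temperatures are illustrative): at high temperature the sample is a soft mixture over all operations, while at low temperature it approaches a one-hot selection, yet the whole expression stays differentiable in α.

```python
import numpy as np

def gumbel_softmax(alpha, tau, rng):
    """Draw a relaxed one-hot sample over operations: add Gumbel noise to
    the logits, then apply a temperature-scaled softmax."""
    gumbel = -np.log(-np.log(rng.uniform(size=alpha.shape)))
    logits = (alpha + gumbel) / tau
    z = np.exp(logits - logits.max())  # stable softmax
    return z / z.sum()

rng = np.random.default_rng(1)
alpha = np.array([1.5, 0.2, -0.5])  # hypothetical op logits
soft = gumbel_softmax(alpha, tau=10.0, rng=rng)   # near-uniform mixture
hard = gumbel_softmax(alpha, tau=0.05, rng=rng)   # near one-hot sample
print(soft.round(2), hard.round(2))
```

Annealing tau toward zero over training lets the supernet transition smoothly from mixing all operations to effectively executing a single one per edge.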
PC-DARTS (Xu et al., 2019b) proposes a simpler approach: at each training step, and for each edge in the DAG, a subset of channels is sampled and sent through the candidate operations, while the remaining channels are passed directly on to the output. Besides reducing memory by training fewer channels, this also acts as a regularizer. DrNAS (Chen et al., 2021f) also reduces memory consumption by progressively increasing the number of channels that are forwarded to the mixed operations, and progressively pruning operation choices, modeled by a Dirichlet distribution.
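The partial-channel idea can be sketched as follows; the "operations" here are hypothetical elementwise functions standing in for real convolutions:

```python
import numpy as np

def partial_channel_mix(x, ops, weights, k, rng):
    """PC-DARTS-style mixing: route a random 1/k fraction of the channels
    through the weighted mixture of candidate operations; pass the rest
    through unchanged."""
    c = x.shape[0]
    idx = rng.permutation(c)
    active = idx[: c // k]
    out = x.copy()
    out[active] = sum(w * op(x[active]) for w, op in zip(weights, ops))
    return out

rng = np.random.default_rng(0)
x = np.ones((8, 4))                      # 8 channels, 4 spatial positions
ops = [lambda t: 3.0 * t, lambda t: t]   # two hypothetical operations
out = partial_channel_mix(x, ops, weights=[0.5, 0.5], k=4, rng=rng)
# 8/4 = 2 channels go through the mixture (0.5*3 + 0.5*1 = 2.0);
# the other 6 channels are passed on unchanged (still 1.0).
print(int((out == 2.0).sum()), int((out == 1.0).sum()))
```

Only the active channels participate in the mixture, so activation memory for the mixed operations shrinks by roughly a factor of k.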
4.3 Hypernetworks

A hypernetwork is a neural network which generates the weights of other neural networks. Hypernetworks were first considered by Schmidhuber (1992, 1993), and the first modern application was by Ha et al. (2017), who used them to obtain better weights for a fixed LSTM architecture. Hypernetworks have since been used for a variety of tasks, including HPO (Mackay et al., 2019; Navon et al., 2021), calibrating model uncertainty (Krueger et al., 2017), and NAS (Brock et al., 2018; Zhang et al., 2018).
The first work to use hypernetworks for NAS (and among the first to use a one-shot model for NAS) was SMASH (one-Shot Model Architecture Search through Hypernetworks) (Brock et al., 2018). SMASH consists of two phases: first, train a hypernetwork to output weights for any architecture in the search space. Next, randomly sample a large set of architectures, generate their weights using the hypernetwork, and output the one with the best validation accuracy. The hypernetwork, a convolutional neural net, takes as input an architecture encoding and outputs a set of weights for that architecture; it is trained by randomly sampling an architecture, generating its weights, computing its training error, and then backpropagating through the entire system (including the hypernetwork weights). Another hypernet-based NAS algorithm is GHN (Graph Hypernetworks) (Zhang et al., 2018). The main differences between SMASH and GHN are the architecture encoding and the architecture of the hypernetwork. Specifically, the GHN hypernetwork is a mix between a graph neural network and a standard hypernetwork: it takes as input the computational graph of an architecture a and uses the message-passing operations typical of GNNs to output the weights of a. The training of the hypernetwork, and the final NAS algorithm, are both the same as in SMASH.
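As a sketch of the idea (not SMASH's actual scheme, which uses a convolutional hypernetwork and a richer encoding), the following toy uses a linear hypernetwork that maps a binary architecture encoding to the weights of a small target layer, then runs a SMASH-style search phase with a made-up proxy objective standing in for validation accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: architectures are encoded as 4-dim binary vectors,
# and each target network is a single 3x2 linear layer (6 weights).
ENC_DIM, N_WEIGHTS = 4, 6
H = rng.normal(size=(N_WEIGHTS, ENC_DIM))  # the (linear) hypernetwork

def generate_weights(encoding):
    """Hypernetwork forward pass: encoding -> flattened target weights."""
    return (H @ encoding).reshape(3, 2)

def target_forward(encoding, x):
    """Run the target architecture with hypernetwork-generated weights."""
    return x @ generate_weights(encoding)

# SMASH-style search phase: score many random encodings with generated
# weights and keep the best, scored here by a made-up proxy objective.
x = rng.normal(size=(5, 3))
def proxy_score(enc):
    return -np.abs(target_forward(enc, x)).mean()  # stand-in for val acc
encodings = rng.integers(0, 2, size=(32, ENC_DIM)).astype(float)
best = max(encodings, key=proxy_score)
print(best.shape)  # (4,)
```

In SMASH proper, H itself is trained by backpropagating the sampled architecture's training loss through both the generated weights and the hypernetwork, which this sketch omits.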
5. Speedup Techniques

In this section, we cover general speedup techniques for NAS algorithms, including performance prediction (Section 5.1), multi-fidelity methods (Section 5.2), meta-learning approaches (Section 5.3), and weight inheritance (Section 5.4).
5.1 Performance Prediction

A large body of work has been devoted to predicting the performance of neural networks before they are fully trained. Such techniques have the potential to greatly speed up the runtime of NAS algorithms, since they remove the need to fully train each architecture under consideration. These speedup techniques can improve nearly all types of NAS algorithms, from black-box optimization (Ru et al., 2020a; White et al., 2021c) to one-shot NAS (Xiang et al., 2021). In this section, we discuss the performance prediction techniques themselves, while in Section 5.2, we discuss methods of incorporating them into NAS algorithms.
Formally, given a search space A and an architecture a ∈ A, denote the final validation accuracy obtained with a fixed training pipeline as f(a). A performance predictor f′ is defined as any function which predicts the accuracy or relative accuracy of architectures without fully training them. In other words, evaluating f′(a) takes less time than evaluating f(a), and {f′(a) | a ∈ A} ideally has high correlation or rank correlation with {f(a) | a ∈ A}.
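Rank correlation is the usual yardstick here. A small numpy sketch of Spearman correlation (Pearson correlation of the rank vectors, assuming no ties), applied to hypothetical predictor and true-accuracy scores:

```python
import numpy as np

def spearman(pred, true):
    """Spearman rank correlation: Pearson correlation of the rank vectors
    (this simple double-argsort ranking assumes no tied values)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return float(np.corrcoef(rank(pred), rank(true))[0, 1])

# Hypothetical scores: f' orders the architectures exactly like f,
# even though its raw values are on a completely different scale.
f_true = np.array([0.90, 0.85, 0.92, 0.70, 0.88])
f_prime = np.array([0.50, 0.40, 0.55, 0.10, 0.45])  # cheap predictor
print(spearman(f_prime, f_true))  # 1.0: identical ranking
```

Because NAS only needs to pick the best architectures, a predictor with a poorly calibrated scale but a high rank correlation is still useful.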
In the rest of this section, we give an overview of different types of performance predictors, including learning curve extrapolation (Section 5.1.1), zero-cost proxies (Section 5.1.2), and other methods (Section 5.1.3). Note that surrogate models (Section 3.4) and one-shot models (Section 4) can also be seen as types of performance predictors.
5.1.1 Learning Curve Extrapolation

Learning curve extrapolation methods seek to predict the final performance of a given architecture after partially training it, by extrapolating from its so-called partial learning curve (the series of validation accuracies at all epochs so far). This can, e.g., be accomplished by fitting the partial learning curve to a parametric model such as a Weibull, log-power, or Janoschek curve (Domhan et al., 2015) (see Figure 8 (left)).

[Figure 8: Illustration of the main types of performance predictors: extrapolating the validation-accuracy learning curve via a parametric model (left), assessing the generalizability of an architecture with a single forward pass on a single minibatch of data (middle), and training the architecture on a subset of the data (right).]

Learning curve extrapolation methods can also be used together with a surrogate model: in that case, the model takes as input both an encoding of a and a partial learning curve of a, and outputs a prediction f′(a) (Baker et al., 2018; Klein et al., 2017).
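As a sketch of parametric extrapolation, the following fits a deliberately simple two-parameter inverse-time model acc(t) ≈ a - b/t by linear least squares and extrapolates to a later epoch; real systems such as Domhan et al. (2015) fit much richer curve families and average over them, and the curve data here is synthetic:

```python
import numpy as np

def extrapolate_inverse_time(epochs, accs, t_final):
    """Fit acc(t) ≈ a - b/t by least squares; predict accuracy at t_final."""
    A = np.column_stack([np.ones_like(epochs, dtype=float), -1.0 / epochs])
    (a, b), *_ = np.linalg.lstsq(A, accs, rcond=None)
    return a - b / t_final

# Partial learning curve generated from the same family (a=0.9, b=0.5).
t = np.arange(1, 11, dtype=float)
accs = 0.9 - 0.5 / t
print(round(extrapolate_inverse_time(t, accs, t_final=100), 3))  # 0.895
```

The fitted asymptote a is itself a useful prediction: it estimates the accuracy the architecture would reach if trained to convergence.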
Learning curve extrapolation methods can be used to speed up black-box NAS algorithms (Domhan et al., 2015; Ru et al., 2020a; Yan et al., 2021b) or in conjunction with multi-fidelity algorithms such as Hyperband or BOHB (described in Section 5.2).
5.1.2 Zero-Cost Proxies

Zero-cost proxies are a recently developed family of performance prediction techniques. The idea is to run a very fast computation (such as a single forward and backward pass on a single minibatch of data) over a set of architectures, assigning a score to each architecture, in the hope that the scores correlate with the final accuracies (Mellor et al., 2021). These techniques get their "zero-cost" name because the overall time to score each architecture is negligible (often less than 5 seconds) compared to most other performance prediction techniques (Abdelfattah et al., 2021). While most zero-cost proxies compute architecture scores from a (single) minibatch of data, some are data-independent, computing the score solely from the initialized weights or the number of parameters of the neural network.
Zero-cost proxies were first introduced by Mellor et al. (2021), who estimated the relative performance of neural networks based on how well different linear regions of the network map are separated (see Figure 8 (middle)).
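A simplified sketch in the spirit of this proxy: score a randomly initialized ReLU network by how many distinct binary activation patterns a minibatch induces. (The published score is actually the log-determinant of a kernel built from these patterns; the counting variant and the toy networks below are assumptions for illustration only.)

```python
import numpy as np

def activation_pattern_score(weights, minibatch):
    """Count distinct ReLU activation sign-patterns over a minibatch:
    more distinct patterns suggests the network separates its inputs
    into more linear regions."""
    h = minibatch
    patterns = []
    for W in weights:
        pre = h @ W
        patterns.append(pre > 0)    # binary activation code per example
        h = np.maximum(pre, 0.0)    # ReLU
    codes = np.concatenate(patterns, axis=1)
    return len({tuple(row) for row in codes})

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))        # one minibatch of 32 examples
wide = [rng.normal(size=(8, 64)), rng.normal(size=(64, 64))]
tiny = [rng.normal(size=(8, 2)), rng.normal(size=(2, 2))]
# The wider net induces more distinct activation patterns on the batch.
print(activation_pattern_score(wide, X) > activation_pattern_score(tiny, X))
```

The whole score costs one forward pass on one minibatch at initialization, which is what makes proxies of this kind "zero-cost".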
+page_content=' Since the initial technique, several new zero- cost proxies have been introduced.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
Abdelfattah et al. (2021) made a connection to the pruning-at-initialization literature (Lee et al., 2019b; Tanaka et al., 2020; Theis et al., 2018; Wang et al., 2020a) and used this connection to introduce five zero-cost proxies. Their best-performing method, synflow (Tanaka et al., 2020), is a data-independent method that computes the L1 path-norm of the network: the sum, over all paths connecting the input to the output, of the product of the initialized weights along each path.
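The path-norm just described need not be computed by enumerating paths. As a minimal sketch (for a linear network, not the full synflow procedure), a single "forward pass" of an all-ones vector through the absolute weight matrices yields the same quantity:

```python
import numpy as np

def path_norm_score(weight_matrices):
    """Sketch of a synflow-style, data-independent score: the L1
    path-norm, i.e. the sum over all input-output paths of the product
    of absolute weight values along the path. Propagating an all-ones
    vector through the absolute weight matrices computes this sum
    without enumerating paths."""
    x = np.ones(weight_matrices[0].shape[1])
    for W in weight_matrices:
        x = np.abs(W) @ x
    return float(x.sum())
```

For a tiny two-layer net the score can be checked against brute-force path enumeration: with `W1 = [[1, -2], [3, 4]]` and `W2 = [[5, -6]]`, both give 5 + 10 + 18 + 24 = 57.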
Since then, two other data-independent methods have been introduced, one based on a series of synthetic proxy tasks testing scale invariances and spatial information (Li et al., 2021d), and one based on approximating the neural network as a piecewise linear function (Lin et al., 2021).
Other data-dependent methods make use of the neural tangent kernel (NTK) (Jacot et al., 2018), approximating either its trace norm (Shu et al., 2021) or its spectrum (Chen et al., 2021e).
Although zero-cost proxies have received significant attention since they were first introduced, recent work has shown that simple baselines such as “number of parameters” and “FLOPs” are surprisingly competitive with all leading techniques. The main downsides of zero-cost proxies are that they may be unreliable, especially on larger search spaces (Chen et al., 2022; Ning et al., 2021; White et al., 2022). They may also exhibit biases, such as preferring larger models (Ning et al., 2021) or wider channels (Chen et al., 2022), although these biases can be removed (Krishnakumar et al., 2022). On the other hand, recent work encourages the viewpoint that zero-cost proxies are “weak learners” that can be combined with other techniques, including other zero-cost proxies, to improve performance (Krishnakumar et al., 2022; White et al., 2022). Initial work shows that zero-cost proxies can be successfully added to both Bayesian optimization-based NAS (Shen et al., 2021; White et al., 2021c) and one-shot NAS (Xiang et al., 2021).
5.1.3 Other Low-Fidelity Predictions

Besides training for fewer epochs, other works give a low-fidelity estimate of the final accuracy by training on a subset of the training data (or a smaller, synthetically generated dataset). This is visualized in Figure 8 (right).
Multiple works have studied different subset selection algorithms, such as random sampling, entropy-based sampling (Na et al., 2021), clustering via core-sets (Shim et al., 2021), facility location (Prasad et al., 2022), and k-center (Na et al., 2021).
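One plausible instantiation of such a subset-selection strategy is greedy k-center selection (the classic farthest-point 2-approximation); the sketch below is illustrative and not tied to any one of the cited papers:

```python
import numpy as np

def greedy_k_center(X, k, seed=0):
    """Greedy k-center subset selection: repeatedly add the point
    farthest from the points chosen so far, so the chosen subset
    covers the dataset as evenly as possible."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(X)))]
    # distance from every point to its nearest chosen point
    dists = np.linalg.norm(X - X[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(dists.argmax())  # farthest point from current subset
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return chosen
```

On a toy dataset with two well-separated clusters, selecting k = 2 points places one representative in each cluster regardless of the random starting point.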
Prasad et al. (2022) introduce adaptive subset selection to NAS, in which the subset is updated throughout training in order to maximize validation accuracy. Such et al. (2020) introduce generative teaching networks, which use a small set of synthetic data to train neural networks much faster than the original real training data allows. The synthetic data is created using a data-generating network, trained so that networks trained on the synthetic data match the accuracy of networks trained on real data. A related method is synthetic petri dish (Rawal et al., 2020), which evaluates architecture motifs by placing them into a small neural network and then training them on a small synthetic dataset. This latter method also explicitly optimizes the correlation between the architecture rankings produced by the approximation and those produced by full training.
5.2 Multi-Fidelity Algorithms

While the previous section was devoted to methods of predicting the performance of neural networks, we now cover algorithms that use these methods to run NAS efficiently. Formally, the objective function f : X → R, which is typically expensive to fully evaluate, can be cheaply approximated by a lower-fidelity version f̂(·, b) of f(·), parameterized by the fidelity parameter b. When b = b_max, we retrieve the true function: f(·) = f̂(·, b_max).
This is a generalization of the definition from Section 5.1. The fidelity parameter can denote the number of training epochs or the training-data subset size, and it can make use of the performance prediction techniques from the previous section. One can even use multiple fidelity parameters at a time (Kandasamy et al., 2017; Zhou et al., 2020). Next, we describe the optimization algorithms that exploit access to multi-fidelity function estimates f̂(·, b).
SuccessiveHalving (SH) (Jamieson and Talwalkar, 2016) is one of the simplest multi-fidelity algorithms. It starts by training a large number of architectures, progressively discarding those that are not promising based on lower-fidelity evaluations, until only the most promising architectures are evaluated at the highest fidelity. The fidelity thresholds and the number of architectures promoted to higher fidelities are controlled by a hyperparameter.
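The SH loop can be sketched in a few lines; `evaluate(arch, budget)` stands in for any lower-fidelity estimate f̂(·, b) (higher is better), and `eta` is the halving rate hyperparameter:

```python
def successive_halving(candidates, evaluate, min_budget=1, eta=3):
    """Minimal SuccessiveHalving sketch. At each rung, the top 1/eta of
    the pool survives and the per-candidate budget grows by a factor of
    eta, until a single candidate remains."""
    pool, budget = list(candidates), min_budget
    while len(pool) > 1:
        scores = {a: evaluate(a, budget) for a in pool}
        pool = sorted(pool, key=scores.get, reverse=True)[:max(1, len(pool) // eta)]
        budget *= eta
    return pool[0]
```

With eta = 3 and 27 candidates, the pool shrinks 27 → 9 → 3 → 1 while the budget grows 1 → 3 → 9, so most of the compute is spent on the few surviving candidates.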
A popular improvement to SH is Hyperband (HB) (Li et al., 2018), a multi-armed bandit strategy that repeatedly calls SH as a subroutine, using a different value of the minimum budget for each call. HB therefore hedges its bets against any single choice of the minimum budget.
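The bracket structure of HB can be sketched as follows (a simplified reading of the algorithm, assuming `max_budget` is a power of `eta`; `sample_arch` and `evaluate` are placeholder callbacks):

```python
import math

def hyperband(sample_arch, evaluate, max_budget=27, eta=3):
    """Hyperband sketch: run a SuccessiveHalving-style race in several
    brackets, each starting from a different minimum budget, hedging
    against any single choice of that minimum."""
    s_max = int(round(math.log(max_budget, eta)))
    best, best_score = None, -math.inf
    for s in range(s_max, -1, -1):
        n = math.ceil((s_max + 1) * eta**s / (s + 1))  # initial pool size
        budget = max_budget / eta**s                   # bracket's minimum budget
        pool = [sample_arch() for _ in range(n)]
        for _ in range(s + 1):                         # SH rungs within the bracket
            pool = sorted(pool, key=lambda a: evaluate(a, budget),
                          reverse=True)[:max(1, len(pool) // eta)]
            budget *= eta
        score = evaluate(pool[0], max_budget)
        if score > best_score:
            best, best_score = pool[0], score
    return best
```

The aggressive brackets (large s) try many architectures briefly; the conservative bracket (s = 0) trains a handful at full budget from the start, so no single budget assumption dominates.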
While SH and HB are purely based on (smart) random search, recent works have combined HB with both Bayesian optimization and evolution. Bayesian optimization Hyperband (BOHB) (Falkner et al., 2018; Lindauer et al., 2022) works similarly to HB in its first iteration; on later iterations, it fits a probabilistic surrogate model for each fidelity in order to make informed sampling decisions. Similarly, DEHB (Mallik and Awad, 2021) combines differential evolution (Storn and Price, 1997) with HB, significantly improving the later iterations of HB. ASHA (Li et al., 2020c) and ABOHB (Klein et al., 2020) improve SH and BOHB further, respectively, by making use of massively parallel asynchronous computation and early stopping strategies. Finally, EcoNAS (Zhou et al., 2020) proposes a hierarchical evolutionary search method that partitions the search space into subsets and allocates increasing fidelities to the most promising architectures in each subset.
5.3 Meta-Learning

A majority of NAS approaches consider solving a single task from scratch, ignoring previously explored solutions. However, this is in contrast to what both researchers and practitioners typically do. Often, architectures are transferred across datasets and even across tasks, and on a new task, researchers typically start with a state-of-the-art solution. So, one might ask: why run NAS from scratch rather than re-using information from, e.g., previous experiments? This question naturally leads to the idea of meta-learning or learning to learn (Hochreiter et al., 2001; Schmidhuber, 1987; Thrun and Pratt, 1998), which aims at improving a learning algorithm by leveraging information from past, related experiments (Hospedales et al., 2021; Vanschoren, 2019).
Wong et al. (2018) and Zimmer et al. (2021) employ meta-learning strategies in a more general automated machine learning setting. Since their focus is not on NAS, both consider only a small set of candidate architectures. In Wong et al. (2018), tasks are encoded in a similar fashion as word embeddings in NLP (Mikolov et al., 2013). In contrast, Zimmer et al. (2021) simply warm-start their search from previously well-performing configurations.
Lian et al. (2020) and Elsken et al. (2020) focus on few-shot learning: the problem of learning a new task from just a few training data points. The authors extend gradient-based, model-agnostic meta-learning approaches such as MAML (Finn et al., 2017) and REPTILE (Nichol et al., 2018) to meta-learn not only an initial set of weights for a fixed neural network architecture but also the architecture itself, by incorporating a differentiable method such as DARTS (Liu et al., 2019c) into the meta-learning algorithm.
The work by Lee et al. (2021) is neither restricted to few-shot learning nor to choosing architectures from a small set of candidates. Rather, they employ typical NAS search spaces such as the ones discussed in Section 2. The authors propose a novel set encoder to improve upon deep sets (Zaheer et al., 2017) and set transformers (Lee et al., 2019a). A graph neural network-based decoder is employed to generate neural architectures given a set encoding, and a graph neural network is also employed to encode generated architectures. The architecture encoding, in combination with the set encoding, is then used to meta-learn a surrogate model that predicts the performance of an (architecture, dataset) tuple. Shala et al. (2022) extend the work of Lee et al. (2021) by employing the dataset and architecture encodings within a Bayesian optimization framework, resulting in a probabilistic surrogate predictor. This further enables adapting the surrogate to datapoints seen at test time.
5.4 Weight Inheritance and Network Morphisms

While black-box optimization-based NAS algorithms train each architecture from scratch, and one-shot methods train all architectures with the same set of weights, a line of work proposes an in-between solution: reuse the weights of trained architectures for similar untrained architectures. This idea is especially helpful for black-box optimization approaches that apply only small, sequential changes to architectures when generating a new candidate architecture. For example, Real et al. (2017) propose to copy the weights of all layers that have not been affected by the applied mutations from the parent architecture to its offspring. This idea has also been extended by the concept of network morphisms (Chen et al., 2016; Wei et al., 2016). Network morphisms are operators acting on the space of neural network architectures. They change the architecture of a neural network without changing the function it represents, i.e., given an arbitrary input, the output remains identical for the original architecture and the architecture modified by a network morphism. This is typically achieved by properly initializing the modified architecture.
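A concrete example of such an initialization is a widening morphism in the spirit of Net2WiderNet (Chen et al., 2016); the sketch below is a simplified two-layer illustration rather than the full published method:

```python
import numpy as np

def widen_layer(W1, W2, new_width, rng=None):
    """Function-preserving widening morphism, sketched for a two-layer
    net y = W2 @ relu(W1 @ x): duplicate randomly chosen hidden units
    and split their outgoing weights so the represented function is
    unchanged. Duplicated units have identical pre-activations, so the
    identity also holds with an elementwise ReLU."""
    if rng is None:
        rng = np.random.default_rng(0)
    old = W1.shape[0]
    # each new unit copies an old one; the first `old` map to themselves
    g = np.concatenate([np.arange(old),
                        rng.integers(0, old, new_width - old)])
    counts = np.bincount(g, minlength=old)  # how many copies of each old unit
    return W1[g], W2[:, g] / counts[g]      # copy rows, split outgoing mass
```

Because each duplicated unit's outgoing weight is divided by its copy count, the widened network computes exactly the same outputs as the original, so training can resume without losing the inherited function.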
Network morphisms have been employed in evolutionary algorithms (Elsken et al., 2017, 2019a; Schorn et al., 2020; Wistuba, 2019), reinforcement learning (Cai et al., 2018a,b), Bayesian optimization (Jin et al., 2019b), and even one-shot methods (Fang et al., 2020).
6. Extensions

The previous sections studied the main techniques from the classic instantiation of NAS. In this section, we survey a few common extensions: joint NAS + HPO, constrained/multi-objective NAS, and neural ensemble search.
+page_content=' 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='1 Joint NAS + HPO While a large body of the NAS literature assumes fixed hyperparameters in their experimen- tal setup, it has been shown – perhaps not very surprisingly – that hyperparameters also play a significant role.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' For example, on the DARTS search space, tuning hyperparameters can lead to a huge improvement, exceeding the performance gains obtained by NAS (Yang 26 Neural Architecture Search: Insights from 1000 Papers et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' However, the best hyperparameters may vary significantly across architectures even in the same search space (Yang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
Therefore, a recent body of work seeks to overcome these challenges and give efficient algorithms for joint NAS + HPO (Dai et al., 2021; Dong et al., 2020; Izquierdo et al., 2021; Zela et al., 2018; Zhou et al., 2021).
Running joint NAS + HPO is significantly more challenging than running NAS or HPO in isolation. First, the complexity of the search space is substantially increased, due to both the larger number of hyperparameters and their heterogeneity. Second, the interaction between architectures and training hyperparameters in terms of network performance is difficult to model. Furthermore, some hyperparameters can have different effects on performance under different evaluation budgets, reducing the effectiveness of many multi-fidelity and performance prediction techniques.
In light of these challenges, several solutions have been proposed. Various methods have been introduced to homogenize the search space, such as reformulating NAS as an HPO problem with categorical hyperparameters (Zela et al., 2018), or standardizing the representation of the NAS and HPO hyperparameters by assigning continuous-valued coefficients in [0, 1] (Dong et al., 2020). The search strategies resemble standard NAS algorithms, such as BO (Dai et al., 2021; Izquierdo et al., 2021; Zela et al., 2018), evolution (Dai et al., 2021; Izquierdo et al., 2021), or REINFORCE with weight sharing (Dong et al., 2020).
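One common homogenization, reformulating NAS as an HPO problem with categorical hyperparameters (Zela et al., 2018), can be sketched as follows. The search-space dimensions and ranges below are hypothetical, illustrative choices, not a configuration taken from any of the cited works:

```python
import math
import random

# Hypothetical joint NAS + HPO search space: architectural choices are encoded
# as categorical hyperparameters next to the training hyperparameters, so a
# single search loop (random search, BO, ...) can optimize both at once.
JOINT_SPACE = {
    "cell_op": ["sep_conv_3x3", "sep_conv_5x5", "max_pool_3x3", "skip_connect"],  # NAS
    "num_cells": [8, 14, 20],                                                     # NAS
    "learning_rate": (1e-4, 1e-1),  # HPO, log-uniform range
    "weight_decay": (1e-5, 1e-2),   # HPO, log-uniform range
}

def sample_configuration(space, rng=random):
    """Draw one joint architecture + training-hyperparameter configuration."""
    config = {}
    for name, domain in space.items():
        if isinstance(domain, list):  # categorical dimension
            config[name] = rng.choice(domain)
        else:                         # continuous dimension, sampled log-uniformly
            lo, hi = domain
            config[name] = 10 ** rng.uniform(math.log10(lo), math.log10(hi))
    return config
```

A black-box optimizer would then score such configurations by training the encoded architecture with the encoded hyperparameters.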
6.2 Constrained and Multi-Objective NAS

Although NAS has been very popular in recent years, most work focuses on optimizing a single objective, typically the accuracy or error rate. However, there are many settings for which this is not sufficient, such as when the neural network must be deployed on an edge device or must satisfy a legal definition of fairness. In such applications, we may need to constrain the latency, memory usage, or rate of errors across classes (Sukthanker et al., 2022). There has been particular interest in constraints related to edge devices and other hardware, termed hardware-aware NAS (Benmeziane et al., 2021).
To achieve one or more objectives in addition to accuracy, the standard NAS objective is typically modified to either a constrained optimization problem (e.g., Bender et al. (2020); Cai et al. (2019); Tan et al. (2019)) or a multi-objective optimization problem (e.g., Elsken et al. (2019a); Hu et al. (2019); Izquierdo et al. (2021); Lu et al. (2019, 2020)).
In constrained optimization, one tries to solve the following equation:

    min_{a ∈ A} f(a)   subject to   h_i(a) ≤ c_i for i ∈ {1, ..., k}     (2)

where f(a) denotes, as before, the original objective function (e.g., validation error), and the h_i represent hardware constraints as a function of the architecture.
This problem is often solved by a transform into an additive or multiplicative unconstrained problem, such as min_{a ∈ A} f(a) + Σ_i λ_i g_i(a), with penalty functions g_i penalizing architectures that do not satisfy the constraints, e.g., g_i(a) = max(0, h_i(a) − c_i), and hyperparameters λ_i trading off the objectives and constraints. This single-objective optimization problem is then solved using black-box optimization methods or one-shot methods. In the latter case, the penalty functions g_i need to be differentiable, which is often not the case. Therefore, discrete metrics such as latency are relaxed to continuous variables through various techniques, such as with a Gumbel softmax function (Wu et al., 2019b).
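The additive penalty transform above can be sketched in a few lines. This is a minimal black-box illustration: `f`, the constraint functions, and the dictionary-based architecture encoding are hypothetical placeholders, not part of any cited method:

```python
def penalized_objective(f, constraints, penalty_weights):
    """Turn min f(a) s.t. h_i(a) <= c_i into the unconstrained problem
    min f(a) + sum_i lambda_i * max(0, h_i(a) - c_i).

    `constraints` is a list of (h_i, c_i) pairs; `penalty_weights` are the
    lambda_i trading off objective and constraint violation."""
    def objective(a):
        penalty = sum(
            lam * max(0.0, h(a) - c)
            for (h, c), lam in zip(constraints, penalty_weights)
        )
        return f(a) + penalty
    return objective

# Toy usage: minimize validation error subject to latency <= 10 ms.
f = lambda a: a["val_error"]
obj = penalized_objective(f, [(lambda a: a["latency_ms"], 10.0)], [2.0])
```

A feasible architecture is scored by `f` alone; an infeasible one pays λ times its constraint violation.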
In multi-objective optimization, the requirements in Equation 2 are treated as separate objectives that are optimized along with the original objective:

    min_{a ∈ A} ( f(a), h_1(a), ..., h_k(a) ).
While this can again be reduced to a single-objective problem via scalarization methods, another common approach is to search for a set of non-dominated solutions that are optimal in the sense that one cannot reduce any objective without increasing at least one other objective. The set of non-dominated solutions is called the Pareto front. The most common approach in this case is to employ multi-objective evolutionary algorithms, which maintain a population of architectures and aim to improve the Pareto front obtained from the current population by evolving it (Elsken et al., 2019a; Hu et al., 2019; Izquierdo et al., 2021; Lu et al., 2019).
Multi-objective evolutionary algorithms have also been used in combination with weight sharing within one-shot models (Lu et al., 2020; Muñoz et al., 2022).
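As a concrete illustration of the definition above, the Pareto front of a finite set of evaluated architectures can be computed by filtering out dominated points. This is a naive O(n²) sketch in which all objectives are assumed to be minimized:

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples.

    A point p is dominated if some other point q is <= p in every
    coordinate and differs from p (i.e., is strictly better somewhere)."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

For example, with objective pairs (error, latency), the point (3, 3) is dominated by (2, 2) and would be dropped, while the extremes (1, 5) and (5, 1) survive.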
One of the most widely studied constrained NAS problems concerns hardware efficiency, such as memory or latency, and many works have been devoted to efficiently approximating the hardware metrics of interest. While simple metrics such as the number of parameters are easily computed, these are often not correlated enough with other metrics of interest such as memory or latency. Other solutions include computing hardware costs modularly, as the sum of the hardware cost of each operation (Cai et al., 2019), or using a surrogate model that predicts hardware costs (Dudziak et al., 2020; Laube et al., 2022).
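The modular approach of summing per-operation costs amounts to a lookup table of pre-measured costs. The operation names and latency values below are illustrative placeholders, not measured numbers:

```python
# Hypothetical per-operation latency table, as would be measured once on the
# target device; a candidate's cost is then the sum over its operations.
OP_LATENCY_MS = {
    "conv_3x3": 1.8,
    "conv_5x5": 3.1,
    "max_pool_3x3": 0.4,
    "skip_connect": 0.05,
}

def estimate_latency(architecture):
    """Estimate total latency of an architecture given as a list of op names."""
    return sum(OP_LATENCY_MS[op] for op in architecture)
```

This additive estimate is cheap but ignores interactions between operations (e.g., memory layout, operator fusion), which is one motivation for learned surrogate models instead.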
6.3 Neural Ensemble Search

While the goal of neural architecture search is to return the best standalone architecture, ensembling methods are popular within the deep learning community for their robust predictions and easy uncertainty quantification. A newly emerging extension of NAS is concerned with finding the best ensemble of neural networks with diverse architectures, which can outperform standard NAS in terms of accuracy, uncertainty calibration, and robustness to dataset shift (Zaidi et al., 2021).
Neural ensemble search is defined as follows:

    min_{a_1, ..., a_M ∈ A} L_val( Ensemble((w*(a_1), a_1), ..., (w*(a_M), a_M)) )     (3)
    s.t. w*(a) = argmin_w L_train(w, a) ∀a ∈ A,

where Ensemble is the function which aggregates the outputs of f_1, ..., f_M.
Note that the search space cardinality is |A|^M rather than |A| as in standard NAS. Zaidi et al. (2021) propose two simple yet effective procedures based on random search and regularized evolution (Real et al., 2019) that search for architectures optimizing Equation 3. Despite their effectiveness, these algorithms require considerable computation due to the black-box nature of the optimization algorithms.
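The random-search procedure can be sketched as follows; `ensemble_loss` and the candidate pool are hypothetical placeholders standing in for trained networks and a validation-loss evaluation of Equation 3:

```python
import random

def random_ensemble_search(candidates, ensemble_size, num_trials,
                           ensemble_loss, rng=None):
    """Repeatedly sample ensembles of M members from a pool of trained
    candidates and keep the one with the lowest validation ensemble loss."""
    rng = rng or random.Random(0)
    best_ensemble, best_loss = None, float("inf")
    for _ in range(num_trials):
        ensemble = rng.sample(candidates, ensemble_size)  # sample without replacement
        loss = ensemble_loss(ensemble)
        if loss < best_loss:
            best_ensemble, best_loss = ensemble, loss
    return best_ensemble, best_loss
```

Each trial requires a full ensemble evaluation on validation data, which is why such black-box procedures are expensive compared with the one-shot approaches discussed next.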
Multi-headed NES (Narayanan et al., 2021) circumvents this issue by applying differentiable NAS methods to the heads of a multi-headed network. The heads are explicitly tuned to optimize the ensemble loss together with a diversity component that encourages uncorrelated predictions from the individual heads. Other works have set up neural ensemble search with a one-shot model for the entire architecture. NESBS (Neural Ensemble Search via Bayesian Sampling) (Shu et al., 2022) proposes to use a supernet to estimate the ensemble performance of independently trained base learners and then use Bayesian sampling to find a high-performing ensemble. NADS (Neural Architecture Distribution Search) (Ardywibowo et al., 2020) follows a similar line by training a supernet to optimize an objective that is tailored to provide better uncertainty estimates and out-of-distribution detection. Chen et al. (2021b) run evolutionary search on the supernet to find a high-performing ensemble.
7. Applications

Along with discovering improved architectures for well-known datasets, one of the primary goals of the field of NAS is to quickly and automatically find high-performing architectures for brand new datasets and tasks. Although the majority of the NAS literature focuses on image classification, there are numerous success stories for NAS applied to less well-known settings. In this section, we discuss a few of these successes, including graph neural networks, generative adversarial networks, dense prediction, and transformers.
7.1 Graph Neural Networks

Graph neural networks (GNNs) are designed to process data represented by graphs. Using NAS to design GNNs poses unique problems: the search space for GNNs is more complex than typical convolutional search spaces, and both NAS and GNNs are independently known for their large computational overhead. Zhou et al. (2019) initiated a line of work applying NAS to GNNs by defining a new search space with GNN-specific operations and then using a reinforcement learning strategy. Follow-up work designed similar search spaces (Gao et al., 2020b; Zhang et al., 2021b) with specialized features such as meta-paths (Ding et al., 2021b), edge features (Jiang and Balaprakash, 2020), or fast sampling operations (Gao et al., 2020b).
Overall, the main difference between NAS for GNNs and more standard NAS settings lies in the construction of the search space. The main search strategies used by GNN NAS algorithms are typical NAS approaches: reinforcement learning (Gao et al., 2020b; Zhao et al., 2020a; Zhou et al., 2019), one-shot methods (Ding et al., 2021b; Zhao et al., 2020b), and evolutionary algorithms (Jiang and Balaprakash, 2020; Nunes and Pappa, 2020). For a detailed survey on NAS for GNNs, see Zhang et al. (2021b).
7.2 Generative Adversarial Networks

Generative adversarial networks (GANs) (Goodfellow et al., 2014) are a popular choice for generative modeling in tasks such as computer vision. GANs make use of two separate networks training in tandem: a generator and a discriminator. Due to having two separate networks, and their notoriously brittle training dynamics (Gulrajani et al., 2017), GANs require special techniques for effective NAS. Different works have achieved improved performance via NAS by searching for only the generator architecture with a fixed discriminator (Doveh and Giryes, 2021), with a predefined progressively growing discriminator (Fu et al., 2020), or by searching both the generator and discriminator architectures simultaneously (Gong et al., 2019).
The most popular choice of search space is the cell-based search space. The cell for the generator consists of a standard convolutional cell, with the addition of various upsampling operations (Ganepola and Wirasingha, 2021; Gong et al., 2019; Tian et al., 2020). The search techniques resemble those used for standard NAS: reinforcement learning (Fu et al., 2020; Tian et al., 2020; Wang and Huan, 2019), one-shot NAS (Doveh and Giryes, 2021; Gao et al., 2020a; Lutz et al., 2018), and evolutionary algorithms (Kobayashi and Nagao, 2020), with scoring based on either the Inception Score (IS) (Salimans et al., 2016) or the Fréchet Inception Distance (FID) (Heusel et al., 2017).
+page_content=' For a comprehensive survey on NAS for GANs, see Ganepola and Wirasingha (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
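To make the scoring step concrete: the Inception Score is defined as exp(E_x[KL(p(y|x) ‖ p(y))]), where p(y|x) is a classifier's predicted class distribution for a generated image x and p(y) is the marginal over all generated images. A minimal pure-Python sketch, assuming the per-image class probabilities have already been produced by an Inception network, might look like:

```python
import math

def inception_score(probs):
    """probs: list of per-image class-probability vectors p(y|x).
    Returns exp of the mean KL divergence between each p(y|x) and
    the marginal p(y); higher means sharper, more diverse predictions."""
    n, k = len(probs), len(probs[0])
    # Marginal distribution p(y), averaged over all images.
    marginal = [sum(p[c] for p in probs) / n for c in range(k)]
    kl_sum = 0.0
    for p in probs:
        kl_sum += sum(p[c] * math.log(p[c] / marginal[c])
                      for c in range(k) if p[c] > 0)
    return math.exp(kl_sum / n)

# Confident, diverse predictions score higher than uniform ones.
sharp = [[0.98, 0.01, 0.01], [0.01, 0.98, 0.01], [0.01, 0.01, 0.98]]
blurry = [[1 / 3, 1 / 3, 1 / 3]] * 3
print(inception_score(sharp) > inception_score(blurry))  # True
```

Sharp, varied class predictions push the score up, which is why IS rewards generators that produce recognizable and diverse samples.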
7.3 Dense Prediction Tasks

Dense prediction for computer vision encompasses a variety of popular tasks such as semantic segmentation, object detection, optical flow, and disparity estimation, and it requires more complex architectures compared to standard image classification problems. For example, the architectures often include a decoder (Ronneberger et al., 2015), modules for generating multi-scale features (He et al., 2015), or task-specific heads (Girshick et al., 2014) in addition to the main network. Thus, NAS algorithms have been applied to search for these components, either in isolation (Chen et al., 2018; Ghiasi et al., 2019; Xu et al., 2019a) or jointly (Guo et al., 2020a; Yao et al., 2020), or by discovering novel design patterns (Du et al., 2020). For a survey on NAS for dense prediction, see Elsken et al. (2022).
Once again, standard NAS techniques are used: Guo et al. (2020a), Liu et al. (2019a), Saikia et al. (2019), and Xu et al. (2019a) employ gradient-based search via DARTS (Liu et al., 2019c); Du et al. (2020) and Ghiasi et al. (2019) use RL; Bender et al. (2020) is inspired by ProxylessNAS (Cai et al., 2019) and ENAS (Pham et al., 2018).
Methods for dense prediction tasks (e.g., Bender et al. (2020); Chen et al. (2019b); Guo et al. (2020a); Shaw et al. (2019); Wu et al. (2019a)) typically build search spaces based on state-of-the-art image classification networks, together with task-specific components drawn from well-performing dense prediction architectures. As many approaches fix the backbone and search only for the other, task-specific components of the architecture, they often employ pre-trained backbone architectures (Chen et al., 2020; Guo et al., 2020a) or even cache the features generated by a backbone (Chen et al., 2018; Nekrasov et al., 2019; Wang et al., 2020c) to speed up architecture search. Chen et al. (2018) and Ghiasi et al. (2019) also use a down-scaled or different backbone architecture during the search process. Methods also sometimes employ multiple search stages, with the goal of first eliminating poorly performing architectures (or parts of the search space) and then successively improving the remaining architectures (Du et al., 2020; Guo et al., 2020a).
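The multi-stage idea can be sketched as a successive-halving-style loop; this is a generic illustration (the noisy evaluate function and budget doubling are our assumptions, not any specific paper's procedure):

```python
import random

def multi_stage_search(candidates, evaluate, stages=3, keep_frac=0.5):
    """Repeatedly score candidates with an increasing budget and
    keep only the best-scoring fraction for the next stage."""
    pool = list(candidates)
    budget = 1
    for _ in range(stages):
        scored = sorted(pool, key=lambda c: evaluate(c, budget), reverse=True)
        pool = scored[:max(1, int(len(scored) * keep_frac))]
        budget *= 2  # spend more per candidate as the pool shrinks
    return pool[0]

# Toy example: each "architecture" is a number; the (noisy) score
# becomes more reliable as the budget grows, so later stages
# discriminate better between the survivors.
random.seed(0)
best = multi_stage_search(
    range(16),
    lambda c, b: c + random.gauss(0, 4.0 / b),
)
print(best)
```

The early, cheap stages prune obviously poor architectures, so the expensive evaluations are spent only on the remaining promising ones.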
Overall, while it is much harder to run NAS on dense prediction tasks than on image classification because of the computational demands of dense prediction, developments have accelerated with the rise of computationally efficient one-shot NAS methods. While efforts thus far have focused on semantic segmentation and object detection, avenues for future work include disparity estimation, panoptic segmentation, 3D detection and segmentation, and optical flow estimation.
Neural Architecture Search: Insights from 1000 Papers

7.4 Transformers

Transformers were proposed by Vaswani et al. (2017) to address the difficulty RNNs have in modeling long sequences: self-attention and cross-attention mechanisms compute each token's representation in an input sequence as a weighted average of the representations of all other tokens. The core transformer design was introduced for machine translation, but it has found widespread use in causal language modeling (Brown et al., 2020; Radford et al., 2019), masked language modeling (Clark et al., 2020; Devlin et al., 2019; Liu et al., 2019d), and, more recently, computer vision (Dosovitskiy et al., 2021; Liu et al., 2021b).
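The weighted-average computation described above can be made concrete with a minimal single-head scaled dot-product self-attention in pure Python (no learned query/key/value projections, which real transformers add):

```python
import math

def self_attention(x):
    """x: list of token embedding vectors. Each output vector is a
    softmax-weighted average of all token vectors, with weights
    given by scaled dot products (here queries = keys = values = x)."""
    d = len(x[0])
    out = []
    for q in x:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in x]
        m = max(scores)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]          # attention weights
        out.append([sum(w * v[j] for w, v in zip(weights, x))
                    for j in range(d)])
    return out

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(self_attention(tokens)[0])  # a convex combination of the inputs
```

Because the weights sum to one, every output vector lies inside the convex hull of the input token vectors.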
Since their release, there have been many efforts to improve transformers via NAS. The most common search strategies for transformers are evolutionary (Chen et al., 2021c; So et al., 2019, 2021) or one-shot (Ding et al., 2021a; Gong et al., 2021; Li et al., 2021a; Su et al., 2021). On the other hand, a huge variety of search spaces has been tried recently, relative to other areas (e.g., in NAS for convolutional architectures, the majority of works use cell-based search spaces). Overall, the field of NAS for transformers has not converged on one “best” type of search space. Below, we survey NAS methods for four types of transformers: decoder-only, encoder-only, encoder-decoder, and vision transformers. See Chitty-Venkata et al. (2022) for an in-depth survey.
Decoder-only architectures, such as the GPT line of architectures (Brown et al., 2020; Radford et al., 2019), directly consume the input text prompt and output the sequence of text tokens most likely to follow. Primer (So et al., 2021) is a NAS algorithm that makes use of evolutionary search on a large macro decoder-only search space. The approach found two consistent improvements to the transformer block: squaring the ReLU in the feedforward block of the transformer layer, and adding depthwise convolutions after the self-attention heads.
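Both discovered modifications are simple to state in code. The sketch below is illustrative only (plain Python lists, hand-supplied kernels); in Primer itself these operations sit inside each transformer block with learned parameters:

```python
def squared_relu(x):
    """Primer's activation: square the ReLU output elementwise."""
    return [max(0.0, v) ** 2 for v in x]

def depthwise_conv1d(seq, kernels):
    """Per-channel causal convolution over the sequence axis, in the
    spirit of the convolutions Primer adds after attention heads.
    seq is a list of timestep vectors; kernels[c] is channel c's kernel."""
    d, k = len(seq[0]), len(kernels[0])
    out = []
    for t in range(len(seq)):
        out.append([
            sum(kernels[c][i] * seq[t - i][c]
                for i in range(k) if t - i >= 0)  # causal: no future taps
            for c in range(d)
        ])
    return out

print(squared_relu([-1.0, 2.0]))  # [0.0, 4.0]
```

Squaring the ReLU is a one-line change to the feedforward block, which is part of why this finding transferred so consistently across model sizes.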
Encoder-only architectures, such as BERT (Devlin et al., 2019), encode the input text into a representation that can be used for many kinds of downstream tasks. Multiple works (Xu et al., 2021a, 2022; Yin et al., 2021) seek to discover compressed versions of BERT, in which the desired latency and task are specified by the user. The typical approach is to train a supernet on a standard self-supervised task (masked language modeling), which can then be used to discover compressed models for a given language task.
Encoder-decoder architectures such as T5 (Raffel et al., 2020) are used in sequence-to-sequence tasks such as machine translation, in which the source language is encoded into a representation, which is then decoded into the target language. So et al. (2019) use evolutionary search together with a new technique to dynamically allocate more resources to more promising candidate models, while Zhao et al. (2021b) propose a DARTS-based algorithm with a new technique for memory-efficient backpropagation. Finally, KNAS (Xu et al., 2021b) and SemiNAS (Luo et al., 2020) speed up the search using zero-cost proxies and a surrogate transformer model, respectively.
A large variety of NAS algorithms have been studied for vision transformer search spaces, with the majority using one-shot methods. AutoFormer (Chen et al., 2021c) searches over vision transformer architectures and hyperparameters by training a supernet with a single-path one-shot strategy (Guo et al., 2020b) and then running evolutionary search on the trained supernet. A follow-up work, AutoFormerv2 (Chen et al., 2021d), automated the design of the search space itself by gradually evolving different search dimensions. Other works have improved supernet training via gradient-conflict-aware training (Gong et al., 2021) or channel-aware training (Su et al., 2021). Finally, Li et al. (2021a) and Ding et al. (2021a) run one-shot methods on hybrid CNN and transformer search spaces for computer vision.
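The supernet-then-evolution recipe used by AutoFormer and related works can be sketched generically. Everything below is a hypothetical stand-in (toy search space, toy fitness); in practice each candidate subnet inherits supernet weights and is scored on validation data:

```python
import random

def evolve(sample_arch, mutate, subnet_accuracy,
           pop_size=20, generations=10):
    """Evolutionary search over subnets of a trained supernet:
    evaluation is cheap because each subnet inherits supernet
    weights instead of being trained from scratch."""
    pop = [sample_arch() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=subnet_accuracy, reverse=True)
        parents = pop[:pop_size // 2]            # keep the top half
        children = [mutate(random.choice(parents)) for _ in parents]
        pop = parents + children
    return max(pop, key=subnet_accuracy)

# Toy search space: choose a depth and an embedding dim; the toy
# "accuracy" peaks at depth 12, dim 384.
random.seed(0)
DEPTHS, DIMS = [6, 9, 12, 15], [192, 256, 384, 448]
acc = lambda a: -abs(a[0] - 12) - abs(a[1] - 384) / 64
best = evolve(
    lambda: (random.choice(DEPTHS), random.choice(DIMS)),
    lambda a: (random.choice(DEPTHS), a[1]) if random.random() < 0.5
              else (a[0], random.choice(DIMS)),
    acc,
)
print(best)
```

Because fitness queries are nearly free once the supernet is trained, the evolutionary loop can afford hundreds of evaluations per generation.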
8. Benchmarks

In the early days of NAS research, the most popular metrics were the final test accuracies on CIFAR-10 and ImageNet. This caused inconsistent search spaces and training pipelines across papers, and also drove up computational costs. For example, it became standard to train the final architecture for 600 epochs, even though the test accuracy only increases by a fraction of a percent past 200 epochs. Recently, queryable NAS benchmarks have helped the field reduce computation when developing NAS techniques and achieve fair, statistically significant comparisons between methods. A NAS benchmark (Lindauer and Hutter, 2020) is defined as a dataset with a fixed train-test split, a search space, and a fixed evaluation pipeline for training the architectures. A tabular NAS benchmark is one that additionally gives precomputed evaluations for all possible architectures in the search space. A surrogate NAS benchmark is a NAS benchmark along with a surrogate model that can be used to predict the performance of any architecture in the search space. A NAS benchmark is queryable if it is either a tabular or a surrogate benchmark. Queryable NAS benchmarks can be used to efficiently simulate many NAS experiments using only a CPU, by querying the performance of neural networks from the benchmark rather than training them from scratch.
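Simulating a NAS run against a tabular benchmark then reduces to dictionary lookups. The toy sketch below (a hypothetical precomputed table, not any real benchmark's API) shows random search evaluated purely by table queries:

```python
import random

# Toy stand-in for a tabular benchmark: precomputed accuracies
# keyed by an architecture encoding (here, a tuple of operations).
OPS = ["conv3x3", "conv1x1", "skip", "zero"]
random.seed(1)
TABLE = {arch: random.uniform(0.85, 0.95)
         for arch in ((a, b, c) for a in OPS for b in OPS for c in OPS)}

def random_search(n_queries):
    """Each 'evaluation' is a table lookup, so thousands of
    simulated NAS runs cost only CPU time."""
    best_arch, best_acc = None, -1.0
    for _ in range(n_queries):
        arch = tuple(random.choice(OPS) for _ in range(3))
        acc = TABLE[arch]
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc

arch, acc = random_search(50)
print(arch, round(acc, 4))
```

Swapping in a different search strategy only changes how `arch` is proposed; the lookup-based evaluation stays the same, which is what makes fair, repeated comparisons cheap.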
In the rest of the section, we give an overview of popular NAS benchmarks. See Appendix Table 2 for a summary.
+page_content=' The first tabular NAS benchmark was NAS-Bench-101 (Ying et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' It consists of a cell-based search space of 423 624 architectures, each with precomputed validation and test accuracies on CIFAR-10 for three different seeds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' A follow-up work, NAS-Bench- 1Shot1 (Zela et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2020b), is able to simulate one-shot algorithms by defining subsets of the NAS-Bench-101 search space which have a fixed number of nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' NAS-Bench-201 (Dong and Yang, 2020) is another popular tabular NAS benchmark, consisting of 6466 unique architectures, each with precomputed validation and test accuracies on CIFAR-10, CIFAR- 100, and ImageNet-16-120 for three seeds each.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' NATS-Bench (Dong et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2021b) is an extension of NAS-Bench-201 which also includes a macro search space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Another extension, HW-NAS-Bench-201 (Li et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2021b), gives the measured or estimated hardware cost for all architectures across six hardware devices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
Surr-NAS-Bench-DARTS (formerly called NAS-Bench-301) (Siems et al., 2020) was the first surrogate NAS benchmark, created by training 60 000 architectures from the DARTS (Liu et al., 2019c) search space on CIFAR-10 and then training a surrogate model. The authors also released Surr-NAS-Bench-FBNet for the FBNet search space (Wu et al., 2019b).
A follow-up work, NAS-Bench-x11 (Yan et al., 2021b), devised a technique to predict the full learning curve, allowing the validation accuracies to be queried at arbitrary epochs, which is necessary for simulating multi-fidelity NAS algorithms. TransNAS-Bench-101 (Duan et al., 2021) is a tabular benchmark that covers seven different computer vision tasks from the Taskonomy dataset (Zamir et al., 2018).
Beyond computer vision, NAS-Bench-NLP (Klyuchnikov et al., 2022) consists of an LSTM-inspired search space for NLP, and NAS-Bench-ASR (Mehrotra et al., 2021) is a tabular NAS benchmark for automatic speech recognition (Garofolo, 1993). NAS-Bench-360 (Tu et al., 2022a) is a benchmark suite which gives NAS benchmarks on ten diverse problems such as prosthetics control, PDE solving, protein folding, and astronomy imaging, and is search space agnostic, although three of the tasks have pretrained architectures on the NAS-Bench-201 search space. Finally, NAS-Bench-Suite (Mehta et al., 2022) is a benchmark suite which combines the majority of existing queryable NAS benchmarks, 28 total tasks, into a single unified interface. An extension, NAS-Bench-Suite-Zero, offers precomputed zero-cost proxy values across all tasks (Krishnakumar et al., 2022).
Using queryable benchmarks allows researchers to easily simulate hundreds of trials of the algorithms with different initial random seeds, making it easy to report statistically significant comparisons. However, over-reliance on a few benchmarks can lead to the field over-fitting (Koch et al., 2021; Raji et al., 2021) and is not conducive to the discovery of truly novel methods. Therefore, researchers should use a large set of diverse NAS benchmarks whenever possible.
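To make the simulation idea concrete, here is a minimal sketch of random search on a queryable benchmark. The `benchmark` dictionary of precomputed accuracies is a toy stand-in; real benchmarks such as NAS-Bench-201 expose their own query APIs, which differ in detail.

```python
import random

def simulate_random_search(benchmark, num_queries, seed):
    """Simulate random search on a queryable (tabular) NAS benchmark.

    `benchmark` maps an architecture encoding to its precomputed
    validation accuracy, so each "evaluation" is a table lookup
    rather than hours of GPU training.
    """
    rng = random.Random(seed)
    archs = list(benchmark)
    best_arch, best_acc = None, float("-inf")
    history = []  # incumbent accuracy after each query (anytime curve)
    for _ in range(num_queries):
        arch = rng.choice(archs)
        acc = benchmark[arch]
        if acc > best_acc:
            best_arch, best_acc = arch, acc
        history.append(best_acc)
    return best_arch, history

# Toy benchmark: 100 hypothetical architectures with random accuracies.
rng = random.Random(0)
benchmark = {f"arch_{i}": rng.uniform(0.85, 0.95) for i in range(100)}

# Hundreds of seeded trials take milliseconds, so statistics come cheap.
finals = [simulate_random_search(benchmark, 50, seed)[1][-1]
          for seed in range(100)]
print(min(finals), max(finals))
```

Because each trial is just dictionary lookups, reporting results over a hundred seeds costs essentially nothing, which is exactly what makes statistically sound comparisons practical.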
9. Best Practices

The field of NAS has at times seen problems with reproducibility and fair, statistically significant comparisons among methods. These issues impede the overall research progress in the field of NAS. Recently, a few papers have laid out best practices and guidelines for conducting sound NAS research that is reproducible and makes fair comparisons (Li and Talwalkar, 2019; Lindauer and Hutter, 2020; Yang et al., 2020). These best practices are also available as a checklist (Lindauer and Hutter, 2020). We encourage NAS researchers to follow the checklist and to attach it to the appendix of their papers. Now, we summarize these best practices for NAS research.
9.1 Releasing Code and Important Details

It is nearly impossible to reproduce NAS methods without the full code. Even then, random seeds should be specified and reported. Furthermore, releasing easy-to-use code can lead to more follow-up methods and impact. For example, Liu et al. (2019c) released easy-to-use code for DARTS, which facilitated numerous follow-up works.
When releasing code, it is important to release all components, including the training pipeline(s), search space, hyperparameters, random seeds, and the NAS method. Many papers use different architecture training pipelines during the search and during the final evaluation, so it is important to include both. Note that using popular NAS benchmarks such as NAS-Bench-101 or NAS-Bench-201 (see Section 8) makes this substantially easier: the training pipeline is already fixed.
NAS methods often have several moving parts. As a result, they typically have many hyperparameters of their own that could be tuned. In fact, many NAS methods themselves make use of neural networks – one could even run a NAS algorithm on the NAS algorithm! Due to this complexity, it is important to report if, or how, these hyperparameters were tuned. When reporting results on a large set of search spaces and datasets, the best practice is to tune the hyperparameters of the NAS method on one dataset, and then fix these hyperparameters for the remaining evaluations on other datasets. We also note that, in general, devising NAS methods with fewer hyperparameters is more desirable, especially because it has recently been shown that hyperparameters often do not transfer well across datasets and search spaces (Mehta et al., 2022).
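A minimal sketch of this tuning protocol is below; `run_nas`, the datasets, and the hyperparameter grid are hypothetical placeholders (a real run would train architectures rather than return a synthetic score).

```python
import random

def run_nas(dataset, lr, seed=0):
    """Hypothetical stand-in: returns the validation accuracy a NAS
    method achieves on `dataset` with hyperparameter value `lr`.
    The synthetic score peaks at lr = 0.025, plus small seed noise."""
    rng = random.Random(f"{dataset}-{lr}-{seed}")
    return 0.90 - abs(lr - 0.025) + rng.uniform(-0.01, 0.01)

datasets = ["cifar10", "cifar100", "imagenet16-120"]  # tuning set first
grid = [0.005, 0.025, 0.1]  # candidate values of one NAS hyperparameter

# Tune on the first dataset only...
best_lr = max(grid, key=lambda lr: run_nas(datasets[0], lr))

# ...then fix the chosen value for every remaining evaluation.
results = {d: run_nas(d, best_lr) for d in datasets[1:]}
print(best_lr, results)
```

The point of the protocol is in the last two steps: the hyperparameter is never re-tuned per dataset, so the reported numbers reflect how the method transfers, not how well it was tuned to each benchmark.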
9.2 Comparing NAS Methods

When comparing NAS methods, it is not enough to use the same datasets. The exact same NAS benchmarks must be used: a dataset with a fixed train-test split, search space, and evaluation pipeline. Otherwise, it is unclear whether a difference in performance is due to the NAS algorithm or the training pipeline. Several papers have shown that simple baselines are competitive with state-of-the-art NAS algorithms (Li and Talwalkar, 2019; Ottelander et al., 2021; Sciuto et al., 2020; White et al., 2021b).
When designing a new method for NAS, it is important to compare the method with baselines such as random sampling and random search. Furthermore, many NAS methods are anytime algorithms: a time budget does not necessarily need to be specified upfront, and the method can be stopped at any time, returning the best architecture found so far. The longer the NAS method runs, the better the final result. These NAS methods should be compared on a plot of performance over time. Even one-shot algorithms can be compared in this way, since the supernet can be discretized and trained at any point.
We recommend that NAS researchers run thorough ablation studies to show which part(s) of the NAS method lead to the most improved performance. As mentioned in the previous section, NAS methods often have several moving parts, so a clean understanding of the importance of each part, and of how the parts work together, is important to report. Finally, we recommend that researchers run multiple trials of their experiments and report the random seeds for each experiment. NAS methods can have high variance across random seeds, so running many trials is important to verify statistically significant comparisons.
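As an illustration, a multi-seed comparison with recorded seeds might be reported as follows; `run_trial` is a hypothetical stand-in for one full NAS run, and the method names and accuracy values are invented.

```python
import random
import statistics

def run_trial(method, seed):
    """Hypothetical stand-in for one full NAS run: returns the test
    accuracy found by `method` under the given random seed."""
    rng = random.Random(f"{method}-{seed}")
    base = {"random_search": 0.930, "new_method": 0.935}[method]
    return base + rng.gauss(0, 0.004)  # seed-to-seed variance

seeds = list(range(30))  # report the exact seeds used
results = {m: [run_trial(m, s) for s in seeds]
           for m in ("random_search", "new_method")}

for method, accs in results.items():
    print(f"{method}: mean {statistics.mean(accs):.4f}, "
          f"std {statistics.stdev(accs):.4f} over {len(seeds)} seeds")
```

Reporting the mean, the spread, and the exact seed list lets readers both judge whether a gap exceeds the seed-to-seed noise and rerun the identical trials.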
10. Resources

In this section, we discuss NAS resources including libraries (Section 10.1), other survey papers (Section 10.2), and additional resources (Section 10.3).
10.1 Libraries

A long line of engineering has been focused on automating machine learning pipelines: Auto-WEKA (Thornton et al., 2013), Auto-Sklearn (Feurer et al., 2015), TPOT (Olson et al., 2016), and AutoGluon-Tabular (Erickson et al., 2020). More recently, a special focus has been given to developing tools that can facilitate the deployment of various NAS algorithms for practitioners, such as Auto-Keras (Jin et al., 2019a), Auto-PyTorch Tabular (Zimmer et al., 2021), AutoGluon (Erickson et al., 2020), and NNI (Microsoft, 2021).
To provide a toolbox for facilitating NAS research, in both developing new NAS methods and applying NAS to new problem domains, various libraries have been proposed. The DeepArchitect library (Negrinho and Gordon, 2017), which separates the search space from the optimizer, was an important first step towards this direction in the NAS community. NASLib (Ruchte et al., 2020) unifies and simplifies NAS research by having a single abstraction for one-shot and BBO algorithms, and a single abstraction for the search spaces of nearly all queryable NAS benchmarks. Archai (Hu et al., 2019) also provides unified abstractions for one-shot and discrete NAS algorithms. The aim for Archai is both to support reproducible rapid prototyping for NAS research as well as to be a turnkey solution for data scientists looking to try NAS on their tasks. PyGlove (Peng et al., 2020) introduced a novel approach to constructing NAS methods via symbolic programming, in which the ML programs are mutable and can be manipulated and processed by other programs.
10.2 Other NAS Survey Papers

There are several older NAS survey papers. Elsken et al. (2019b) provides a compact introduction to NAS and introduces the “three pillars” of NAS: search space, search strategy, and performance evaluation strategy. The survey by Wistuba et al. (2019) provides a more comprehensive view of the landscape of NAS research, unifying and categorizing existing methods. Ren et al. (2020) gave a layout that focused on the historical challenges in the field of NAS, as well as the solutions found to remedy these challenges.
Other surveys have been released which focus on a specific sub-area of NAS. Liu et al. (2021a) focus on evolutionary NAS, Benmeziane et al. (2021) focus on hardware-aware NAS (HW-NAS), Zhang et al. (2021b) survey AutoML (with a NAS focus) on graphs, Elsken et al. (2022) survey NAS for dense prediction in computer vision, and Xie et al. (2021), Santra et al. (2021), and Cha et al. (2022) all survey one-shot NAS methods.
Finally, there are more survey papers with a broader focus such as automated machine learning (AutoML) or automated deep learning (AutoDL), which devote a section to NAS (Dong et al., 2021a; He et al., 2021; Kedziora et al., 2020; Yao et al., 2018; Yu and Zhu, 2020). Notably, the first book on automated machine learning (which is open-access) was released in May 2019 by Hutter et al. (2019).
10.3 Additional Resources

There are multiple long-running workshops which focus on NAS and related topics. The AutoML workshop at ICML (2014-2021) and Meta-Learning workshop at NeurIPS (2017-2022) have had a healthy overlap in attendance with the NAS community, especially over the last few years, while ICLR (2020, 2021) and CVPR (2021) have had workshops devoted solely to NAS. Finally, after many years of AutoML and NAS workshops, the community has grown large enough to start the first AutoML conference: https://automl.cc/.
For a continuously updated, searchable list of NAS papers, see https://www.automl.org/automl/literature-on-neural-architecture-search/. For a continuously updated list of NAS papers published at ML venues, as well as other resources, see https://github.com/D-X-Y/Awesome-AutoDL.
11. Future Directions

Neural architecture search has come a long way in the last few years. The efficiency of NAS algorithms has improved by orders of magnitude, tools exist to compare NAS algorithms without GPUs, and researchers have created many novel techniques and diverse search spaces. Architectures discovered by NAS constitute the state of the art on many tasks. However, there are still many unsolved problems and promising future directions. In this section, we discuss a few of the most important directions for future work in NAS.
11.1 Robustness of Efficient Methods

One-shot methods are one of the most popular techniques for NAS due to their orders-of-magnitude speedups over black-box optimization techniques. While one-shot techniques have already seen major progress, they still face performance issues. Even though many improvements of one-shot algorithms such as DARTS have been proposed (see Section 4.2), these works generally focus on a single improvement; the field lacks a large-scale, fair comparison among one-shot methods. Furthermore, as it currently stands, applying one-shot methods to a new task requires a significant amount of expertise. Devising one-shot approaches that work robustly and reliably across new datasets and tasks is an important area for future study.
+page_content=' Another more recent set of techniques that promises orders-of-magnitude speedups are zero-cost proxies (see Section 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Although recent work has shown that many zero-cost proxies do not consistently outperform simple baselines (Ning et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2021), other work ar- gues that there is untapped potential for zero-cost proxies (White et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2022), especially when combined with existing NAS techniques (White et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2021c;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xiang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' De- veloping a better understanding of when and why zero-cost proxies work in certain settings is an important area for future research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
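To make the idea concrete, the following is a minimal sketch of a gradient-norm-style zero-cost proxy in the spirit of the family surveyed in Section 5.1.2 (not any specific published scoring rule): an architecture is scored by the summed L1 norm of its loss gradients at random initialization, with no training at all. The toy MLP, the random data, and the exact scoring rule here are illustrative assumptions.

```python
import numpy as np

def grad_norm_proxy(widths, x, y, seed=0):
    """Score an MLP architecture without training it: summed L1 norm of the
    MSE-loss gradient at a random initialization (a grad_norm-style proxy)."""
    rng = np.random.default_rng(seed)
    Ws = [rng.standard_normal((a, b)) / np.sqrt(a)
          for a, b in zip(widths[:-1], widths[1:])]
    acts = [x]                                    # forward pass, ReLU hidden layers
    for i, W in enumerate(Ws):
        z = acts[-1] @ W
        acts.append(np.maximum(z, 0.0) if i < len(Ws) - 1 else z)
    delta = 2.0 * (acts[-1] - y) / len(x)         # dL/d(output) for MSE loss
    score = 0.0
    for i in range(len(Ws) - 1, -1, -1):          # manual backprop; accumulate |dL/dW|
        score += np.abs(acts[i].T @ delta).sum()
        if i > 0:
            delta = (delta @ Ws[i].T) * (acts[i] > 0)
    return score

# rank two candidate architectures with a single gradient computation each
rng = np.random.default_rng(1)
x, y = rng.standard_normal((32, 8)), rng.standard_normal((32, 1))
candidates = [[8, 4, 1], [8, 64, 64, 1]]
scores = {tuple(w): grad_norm_proxy(w, x, y) for w in candidates}
```

The cost per candidate is one forward and one backward pass, which is why such proxies promise orders-of-magnitude speedups over training-based evaluation.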
11.2 Going Beyond Hand-Crafted, Rigid Search Spaces

The search spaces for NAS methods are typically carefully hand-designed by human experts. While carefully designing search spaces decreases search times, it also contradicts the idea of having an automated system that can be employed by non-experts, and it limits the scope of NAS to domains where strong search spaces are available. Furthermore, in the last few years, the most-studied type of search space by far has been the cell-based search space, which is significantly more rigid than other types of search spaces. Hierarchical search spaces offer a better trade-off between flexibility and ease of search, yet they are relatively under-explored when compared to cell-based search spaces (see Section 2.5). Furthermore, hierarchical search spaces by nature have a higher diversity than cell-based search spaces, reducing the overall human bias of the search space. Optimizing search spaces in an automated manner (Ru et al., 2020b), such as starting with large, diverse search spaces and then iteratively pruning low-performing parts of the space (Guo et al., 2020a; Radosavovic et al., 2020), could allow researchers to consider a significantly larger variety of architectures.
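The prune-from-a-large-space strategy above can be sketched in a few lines; the toy search space, the stand-in proxy score, and the keep ratio are all hypothetical placeholders for a real evaluator.

```python
def prune_search_space(space, evaluate, rounds=3, keep=0.5):
    """Iteratively discard the worst-scoring portion of a search space,
    in the spirit of starting large and shrinking toward promising regions."""
    space = list(space)
    for _ in range(rounds):
        scored = sorted(space, key=evaluate, reverse=True)   # best first
        space = scored[:max(1, int(len(scored) * keep))]     # keep top fraction
    return space

# toy space: (depth, width) configurations scored by a stand-in proxy
space = [(d, w) for d in range(1, 5) for w in (16, 32, 64, 128)]
proxy = lambda cfg: cfg[0] * cfg[1]   # hypothetical cheap score, not a real evaluator
survivors = prune_search_space(space, proxy)
```

In practice the `evaluate` callable would be an expensive (or proxy-based) performance estimate, and the surviving region would seed a conventional NAS run.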
11.3 Fully Automated Deep Learning

Although NAS has seen a huge amount of interest, recent work has shown that on popular search spaces such as the DARTS search space, optimizing the training hyperparameters leads to a greater increase in performance than optimizing the architecture (Yang et al., 2020; Zela et al., 2020b). While these results show that for some search spaces, optimizing hyperparameters may be more important than optimizing the architecture, the best-case scenario is to optimize both the hyperparameters and the architecture simultaneously. A new thread of research seeks to do exactly this: NAS + HPO (see Section 6.1). Varying hyperparameters along with the architecture also significantly reduces human bias, making it possible to discover previously unknown combinations of architectures and hyperparameters that substantially outperform existing methods. Therefore, while this problem is significantly more challenging than NAS or HPO alone, the potential improvements are much higher. Furthermore, we need not stop at NAS + HPO: we can optimize the full deep learning pipeline, including problem formulation, data processing, data augmentation, model deployment, and continuous monitoring. In other words, the goal is to run fully automated deep learning (AutoDL) (Dong et al., 2021a). As the field of NAS matures, AutoDL has the potential to play a big role in realizing substantial improvements in performance for real-world problems.
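The simplest form of joint NAS + HPO is random search over the combined configuration space, sketched below; the two spaces and the `evaluate` objective are hypothetical stand-ins for actually training and validating a model.

```python
import random

def joint_random_search(arch_space, hp_space, evaluate, n_trials=50, seed=0):
    """Random search over the joint architecture + hyperparameter space,
    rather than fixing one component and tuning only the other."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        cfg = {  # sample one joint configuration
            "arch": {k: rng.choice(v) for k, v in arch_space.items()},
            "hp":   {k: rng.choice(v) for k, v in hp_space.items()},
        }
        score = evaluate(cfg)
        if best is None or score > best[0]:
            best = (score, cfg)
    return best

arch_space = {"depth": [2, 4, 8], "width": [32, 64, 128]}
hp_space = {"lr": [1e-3, 1e-2, 1e-1], "weight_decay": [0.0, 1e-4]}

def evaluate(cfg):
    # hypothetical stand-in objective; real use would train and validate a model
    return cfg["arch"]["depth"] * cfg["arch"]["width"] * cfg["hp"]["lr"]

score, cfg = joint_random_search(arch_space, hp_space, evaluate)
```

The point of the sketch is the sampling structure: because architecture and hyperparameters are drawn together, interactions between the two (e.g., deeper networks preferring different learning rates) can be discovered rather than fixed by hand.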
Acknowledgments and Disclosure of Funding

This research was partially supported by TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No. 952215. We acknowledge funding by the European Research Council (ERC) Consolidator Grant "Deep Learning 2.0" (grant no. 101045765). Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the ERC. Neither the European Union nor the ERC can be held responsible for them.
Figure 9: NAS search space terminology. Operation layers/units/primitives consist of sets of 1–3 operations. A block/module denotes a sequential stack of layers in chain-structured or macro search spaces. A cell denotes a directed acyclic graph of operations (and a motif denotes a small subset of the cell).
Figure 10: Illustration of a macro search space based on Borsos et al. (2019) (left) and a chain-structured search space based on Cai et al. (2020) (right).
A. Additional Figures and Tables

For a visualization of the search space terminologies, see Figure 9. In Figure 10, we show chain-structured and macro search spaces. Architecture encodings are illustrated in Figure 11. Finally, for an overview of NAS benchmarks, see Table 2.
Figure 11: A neural architecture (a) can be encoded using an adjacency matrix (b) or path-based representation (c), with a one-hot or categorical encoding.
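The adjacency-matrix encoding of panel (b) can be sketched as follows, assuming a toy four-node cell and a made-up operation vocabulary; real encodings differ in detail across benchmarks.

```python
import numpy as np

OPS = ["3x3", "1x1", "MP"]   # assumed operation vocabulary for this sketch

def encode_cell(adj, ops):
    """Flatten a cell (DAG adjacency matrix + per-node operation labels) into a
    fixed-length vector: upper-triangle edge bits + one-hot operation choices."""
    adj = np.asarray(adj)
    edges = adj[np.triu_indices(len(adj), k=1)]              # DAG -> upper triangle
    one_hot_ops = np.eye(len(OPS))[[OPS.index(o) for o in ops]]
    return np.concatenate([edges, one_hot_ops.ravel()])

# toy cell: input -> 3x3 -> 1x1 -> output, plus a skip edge from input to output
adj = [[0, 1, 0, 1],
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
vec = encode_cell(adj, ["3x3", "1x1", "MP", "MP"])
```

A categorical variant would store the operation index per node instead of a one-hot row, and a path-based variant would instead enumerate input-to-output paths, as in panel (c).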
| Benchmark | Size | Type | Tab. | Surr. | LCs | One-Shot | Task | #Tasks |
|---|---|---|---|---|---|---|---|---|
| NAS-Bench-101 | 423k | cell | ✓ | | | | Image class. | 1 |
| NATS-Bench-TSS (NAS-Bench-201) | 6k | cell | ✓ | | ✓ | ✓ | Image class. | 3 |
| NATS-Bench-SSS | 32k | macro | ✓ | | ✓ | ✓ | Image class. | 3 |
| NAS-Bench-NLP | > 10^53 | cell | | | ✓ | | NLP | 1 |
| NAS-Bench-1Shot1 | 364k | cell | ✓ | | | ✓ | Image class. | 1 |
| Surr-NAS-Bench-DARTS (NAS-Bench-301) | 10^18 | cell | | ✓ | | ✓ | Image class. | 1 |
| Surr-NAS-Bench-FBNet | 10^21 | chain | | ✓ | | | Image class. | 1 |
| NAS-Bench-ASR | 8k | cell | ✓ | | ✓ | | ASR | 1 |
| TransNAS-Bench-101-Micro | 4k | cell | ✓ | | ✓ | ✓ | Var. CV | 7 |
| TransNAS-Bench-101-Macro | 3k | macro | ✓ | | ✓ | ✓ | Var. CV | 7 |
| NAS-Bench-111 | 423k | cell | | ✓ | ✓ | | Image class. | 1 |
| NAS-Bench-311 | 10^18 | cell | | ✓ | ✓ | ✓ | Image class. | 1 |
| NAS-Bench-NLP11 | > 10^53 | cell | | ✓ | ✓ | | NLP | 1 |
| NAS-Bench-MR | 10^23 | cell | | ✓ | | ✓ | Var. CV | 9 |
| NAS-Bench-Macro | 6k | macro | ✓ | | | ✓ | Image class. | 1 |
| HW-NAS-Bench-201 | 6k | cell | ✓ | | | | Image class. | 3 |
| HW-NAS-Bench-FBNet | 10^21 | chain | ✓ | | | | Image class. | 1 |
| NAS-Bench-360 | Var. | suite | ✓ | | ✓ | ✓ | Var. | 3 |
| NAS-Bench-Suite | Var. | suite | ✓ | ✓ | ✓ | ✓ | Var. | 25 |
| NAS-Bench-Suite-Zero | Var. | suite | ✓ | ✓ | ✓ | ✓ | Var. | 28 |

Table 2: An overview of NAS benchmarks. "Tab." and "Surr." indicate whether a benchmark is queryable via a lookup table or a surrogate model; "LCs" indicates the availability of learning curves.
References

Mohamed S. Abdelfattah, Abhinav Mehrotra, Łukasz Dudziak, and Nicholas Donald Lane. Zero-cost proxies for lightweight NAS. In Proceedings of the International Conference on Learning Representations (ICLR), 2021.

Abdulaziz Almalaq and Jun Jason Zhang. Evolutionary deep learning-based energy consumption prediction for buildings. IEEE Access, 7:1520–1531, 2018.

Peter J. Angeline, Gregory M. Saunders, and Jordan B. Pollack. An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks, 5(1):54–65, 1994.

Randy Ardywibowo, Shahin Boluki, Xinyu Gong, Zhangyang Wang, and Xiaoning Qian. NADS: Neural architecture distribution search for uncertainty awareness. In Proceedings of the International Conference on Machine Learning (ICML), pages 356–366. PMLR, 2020.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR), 2015. arXiv preprint arXiv:1409.0473.

Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.

Bowen Baker, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Accelerating neural architecture search using performance prediction. In Meta-Learning Workshop at NeurIPS, 2018.

Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In Proceedings of the International Conference on Machine Learning (ICML), 2018.

Gabriel Bender, Hanxiao Liu, Bo Chen, Grace Chu, Shuyang Cheng, Pieter-Jan Kindermans, and Quoc V. Le. Can weight sharing outperform random architecture search? An investigation with TuNAS. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.

Hadjer Benmeziane, Kaoutar El Maghraoui, Hamza Ouarnoughi, Smail Niar, Martin Wistuba, and Naigang Wang. A Comprehensive Survey on Hardware-Aware Neural Architecture Search. PhD thesis, LAMIH, Université Polytechnique des Hauts-de-France, 2021.

James S. Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2011.

Kaifeng Bi, Changping Hu, Lingxi Xie, Xin Chen, Longhui Wei, and Qi Tian. Stabilizing DARTS with amended gradient estimation on architectural parameters. arXiv preprint arXiv:1910.11831, 2019.

Zalán Borsos, Andrey Khorlin, and Andrea Gesmundo. Transfer NAS: Knowledge transfer between search spaces with transformer agents. 6th ICML Workshop on Automated Machine Learning, arXiv preprint arXiv:1906.08102, 2019.

Andrew Brock, Theo Lim, J. M. Ritchie, and Nick Weston. SMASH: One-shot model architecture search through hypernetworks. In Proceedings of the International Conference on Learning Representations (ICLR), 2018.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 33:1877–1901, 2020.

Cameron B. Browne, Edward Powley, Daniel Whitehouse, Simon M. Lucas, Peter I. Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012.

Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture search by network transformation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2018a.

Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-level network transformation for efficient architecture search. In Proceedings of the International Conference on Machine Learning (ICML), 2018b.

Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. Proceedings of the International Conference on Learning Representations (ICLR), 2019.

Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Stephen Cha, Taehyeon Kim, Hayeon Lee, and Se-Young Yun. Supernet in neural architecture search: A taxonomic survey.
+page_content=' arXiv preprint arXiv:2204.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='03916, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Listen, attend and spell: A neural network for large vocabulary conversational speech recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In 2016 IEEE in- ternational conference on acoustics, speech and signal processing (ICASSP), pages 4960– 4964.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' IEEE, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Bo Chen, Golnaz Ghiasi, Hanxiao Liu, Tsung-Yi Lin, Dmitry Kalenichenko, Hartwig Adam, and Quoc V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Mnasfpn: Learning latency-aware pyramid architecture for object detection on mobile devices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Boyu Chen, Peixia Li, Chuming Li, Baopu Li, Lei Bai, Chen Lin, Ming Sun, Junjie Yan, and Wanli Ouyang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Glit: Neural architecture search for global and local image transformer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 41 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12–21, 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Hanlin Chen, Ming Lin, Xiuyu Sun, and Hao Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' NAS-bench-zero: A large scale dataset for understanding zero-shot neural architecture search, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' URL https://openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='id=hP-SILoczR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Liang-Chieh Chen, Maxwell Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian Schroff, Hartwig Adam, and Jon Shlens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Searching for efficient multi-scale architectures for dense image prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Informa- tion Processing Systems (NeurIPS), 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' One-shot neural ensem- ble architecture search by diversity-guided search space shrinking.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16525–16534, 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Autoformer: Searching trans- formers for visual recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12270–12280, 2021c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Minghao Chen, Kan Wu, Bolin Ni, Houwen Peng, Bei Liu, Jianlong Fu, Hongyang Chao, and Haibin Ling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Searching the search space of vision transformer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 34, 2021d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Tianqi Chen, Ian J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Goodfellow, and Jonathon Shlens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Net2net: Accelerating learning via knowledge transfer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Repre- sentations (ICLR), 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Wuyang Chen, Xinyu Gong, and Zhangyang Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Neural architecture search on imagenet in four gpu hours: A theoretically inspired perspective.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the International Conference on Learning Representations (ICLR), 2021e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='11535.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xiangning Chen and Cho-Jui Hsieh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Stabilizing differentiable architecture search via perturbation-based regularization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Machine Learning (ICML), pages 1554–1565.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' PMLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xiangning Chen, Ruochen Wang, Minhao Cheng, Xiaocheng Tang, and Cho-Jui Hsieh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Dr- nas: Dirichlet neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2021f.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Progressive differentiable architecture search: Bridging the depth gap between search and evaluation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1294–1303, 2019a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yukang Chen, Tong Yang, Xiangyu Zhang, Gaofeng Meng, Xinyu Xiao, and Jian Sun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Detnas: Backbone search for object detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2019b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 42 Neural Architecture Search: Insights from 1000 Papers Krishna Teja Chitty-Venkata, Murali Emani, Venkatram Vishwanath, and Arun K Somani.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Neural architecture search for transformers: A survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' IEEE Access, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Ben- gio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Attention-based models for speech recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 28, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Aristeidis Chrostoforidis, George Kyriakides, and Konstantinos Margaritis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' A novel evolutionary algorithm for hierarchical neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2107.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='08484, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xiangxiang Chu, Tianbao Zhou, Bo Zhang, and Jixiang Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Fair darts: Eliminating unfair advantages in differentiable architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In European conference on computer vision, pages 465–480.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Springer, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xiangxiang Chu, Xiaoxing Wang, Bo Zhang, Shun Lu, Xiaolin Wei, and Junchi Yan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Darts- : robustly stepping out of performance collapse without indicators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the International Conference on Learning Representations (ICLR), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='01027.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Electra: Pre-training text encoders as discriminators rather than generators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the International Conference on Learning Representations (ICLR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='10555.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Binaryconnect: Training deep neural networks with binary weights during propagations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Advances in neural in- formation processing systems, 28, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Dennis D Cox and Susan John.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' A statistical method for global optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In [Proceedings] 1992 IEEE International Conference on Systems, Man, and Cybernetics, pages 1241– 1246.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' IEEE, 1992.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Bichen Wu, Zijian He, Zhen Wei, Kan Chen, Yuandong Tian, Matthew Yu, Peter Vajda, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Fbnetv3: Joint architecture-recipe search using predictor pretraining.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR), pages 16276–16285, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Tri Dao, Nimit Sohoni, Albert Gu, Matthew Eichhorn, Amit Blonder, Megan Leszczynski, Atri Rudra, and Christopher R´e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Kaleidoscope: An efficient, learnable representation for all structured linear maps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Bert: Pre-training of deep bidirectional transformers for language understanding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of NAACL- HLT, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Mingyu Ding, Xiaochen Lian, Linjie Yang, Peng Wang, Xiaojie Jin, Zhiwu Lu, and Ping Luo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Hr-nas: Searching efficient high-resolution neural architectures with lightweight 43 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter transformers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2982–2992, 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yuhui Ding, Quanming Yao, Huan Zhao, and Tong Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Diffmg: Differentiable meta graph search for heterogeneous graph neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 279–288, 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Tobias Domhan, Jost Tobias Springenberg, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In The International Joint Conference on Artificial Intelligence (IJCAI), 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xuanyi Dong and Yi Yang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Searching for a robust neural architecture in four gpu hours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xuanyi Dong and Yi Yang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nas-bench-201: Extending the scope of reproducible neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Repre- sentations (ICLR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xuanyi Dong, Mingxing Tan, Adams Wei Yu, Daiyi Peng, Bogdan Gabrys, and Quoc V Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Autohas: Efficient hyperparameter and architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='03656, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xuanyi Dong, David Jacob Kedziora, Katarzyna Musial, and Bogdan Gabrys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Automated deep learning: Neural architecture search is not the end.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='09245, 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xuanyi Dong, Lu Liu, Katarzyna Musial, and Bogdan Gabrys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nats-bench: Benchmarking nas algorithms for architecture topology and size.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' An image is worth 16x16 words: Transformers for image recognition at scale.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the International Conference on Learning Representations (ICLR), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='11929.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Sivan Doveh and Raja Giryes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Degas: differentiable efficient generator search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Neural Computing and Applications, 33(24):17173–17184, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Xianzhi Du, Tsung-Yi Lin, Pengchong Jin, Golnaz Ghiasi, Mingxing Tan, Yin Cui, Quoc V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Le, and Xiaodan Song.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Spinenet: Learning scale-permuted backbone for recognition and localization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yawen Duan, Xin Chen, Hang Xu, Zewei Chen, Xiaodan Liang, Tong Zhang, and Zhen- guo Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Transnas-bench-101: Improving transferability and generalizability of cross-task neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5251–5260, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 44 Neural Architecture Search: Insights from 1000 Papers Lukasz Dudziak, Thomas Chau, Mohamed Abdelfattah, Royson Lee, Hyeji Kim, and Nicholas Lane.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Brp-nas: Prediction-based nas using gcns.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the An- nual Conference on Neural Information Processing Systems (NeurIPS), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Thomas Elsken, Jan-Hendrik Metzen, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Simple and efficient architecture search for convolutional neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:1711.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='04528, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural architecture search via lamarckian evolution. In Proceedings of the International Conference on Learning Representations (ICLR), 2019a.
+Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. In JMLR, 2019b.
+Thomas Elsken, Benedikt Staffler, Jan Hendrik Metzen, and Frank Hutter. Meta-learning of neural architectures for few-shot learning. In CVPR, 2020.
+Thomas Elsken, Arber Zela, Jan Hendrik Metzen, Benedikt Staffler, Thomas Brox, Abhinav Valada, and Frank Hutter. Neural architecture search for dense prediction tasks in computer vision, 2022.
+Nick Erickson, Jonas Mueller, Alexander Shirkov, Hang Zhang, Pedro Larroy, Mu Li, and Alexander Smola. Autogluon-tabular: Robust and accurate automl for structured data. arXiv preprint arXiv:2003.06505, 2020.
+Stefan Falkner, Aaron Klein, and Frank Hutter. Bohb: Robust and efficient hyperparameter optimization at scale. In Proceedings of the International Conference on Machine Learning (ICML), 2018.
+Jiemin Fang, Yuzhu Sun, Kangjian Peng, Qian Zhang, Yuan Li, Wenyu Liu, and Xinggang Wang. Fast neural network adaptation via parameter remapping and architecture search. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.
+M. Feurer, A. Klein, K. Eggensperger, J. T. Springenberg, M. Blum, and F. Hutter. Efficient and robust automated machine learning. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), pages 2962–2970, 2015.
+Matthias Feurer and Frank Hutter. Hyperparameter optimization. In Hutter et al. (2019), pages 3–38.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning (ICML), 2017.
+Dario Floreano, Peter Dürr, and Claudio Mattiussi. Neuroevolution: from architectures to learning. Evolutionary Intelligence, 1(1):47–62, 2008.
+Peter I Frazier. A tutorial on bayesian optimization. stat, 1050:8, 2018.
+White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter
+Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, and Zhangyang Wang. Autogan-distiller: Searching to compress generative adversarial networks. In Proceedings of the International Conference on Machine Learning (ICML), pages 3292–3303, 2020.
+Saya Fujino, Naoki Mori, and Keinosuke Matsumoto. Deep convolutional networks for human sketches by means of the evolutionary deep learning. In 2017 Joint 17th World Congress of International Fuzzy Systems Association and 9th International Conference on Soft Computing and Intelligent Systems (IFSA-SCIS), pages 1–5. IEEE, 2017.
+Vayangi Vishmi Vishara Ganepola and Torin Wirasingha. Automating generative adversarial networks using neural architecture search: A review. In 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pages 577–582. IEEE, 2021.
+Chen Gao, Yunpeng Chen, Si Liu, Zhenxiong Tan, and Shuicheng Yan. Adversarialnas: Adversarial neural architecture search for gans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5680–5689, 2020a.
+Yang Gao, Hong Yang, Peng Zhang, Chuan Zhou, and Yue Hu. Graph neural architecture search. In The International Joint Conference on Artificial Intelligence (IJCAI), volume 20, pages 1403–1409, 2020b.
+Roman Garnett. Bayesian Optimization. Cambridge University Press, 2023. To appear.
+John S Garofolo. Timit acoustic phonetic continuous speech corpus. Linguistic Data Consortium, 1993.
+Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V. Le. Nas-fpn: Learning scalable feature pyramid architecture for object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
+Spencer Gibb, Hung Manh La, and Sushil Louis. A genetic algorithm for convolutional network structure optimization for concrete crack detection. In 2018 IEEE Congress on Evolutionary Computation (CEC), pages 1–8. IEEE, 2018.
+R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.
+David E Goldberg and Kalyanmoy Deb. A comparative analysis of selection schemes used in genetic algorithms. In Foundations of Genetic Algorithms, volume 1, pages 69–93. Elsevier, 1991.
+Chengyue Gong, Dilin Wang, Meng Li, Xinlei Chen, Zhicheng Yan, Yuandong Tian, Vikas Chandra, et al. Nasvit: Neural architecture search for efficient vision transformers with gradient conflict aware supernet training. In International Conference on Learning Representations, 2021.
+Xinyu Gong, Shiyu Chang, Yifan Jiang, and Zhangyang Wang. Autogan: Neural architecture search for generative adversarial networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3224–3234, 2019.
+Neural Architecture Search: Insights from 1000 Papers
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 27, 2014.
+Li Guilin, Zhang Xing, Wang Zitong, Li Zhenguo, and Zhang Tong. Stacnas: Towards stable and consistent optimization for differentiable neural architecture search. OpenReview submission https://openreview.net/forum?id=rygpAnEKDH, 2019.
+Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 30, 2017.
+Jianyuan Guo, Kai Han, Yunhe Wang, Chao Zhang, Zhaohui Yang, Han Wu, Xinghao Chen, and Chang Xu. Hit-detector: Hierarchical trinity architecture search for object detection. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020a.
+Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In European Conference on Computer Vision, pages 544–560. Springer, 2020b.
+David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
+Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
+K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1904–1916, 2015.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016a.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016b.
+Xin He, Kaiyong Zhao, and Xiaowen Chu. Automl: A survey of the state-of-the-art. Knowledge-Based Systems, 212:106622, 2021.
+Philipp Hennig and Christian J Schuler. Entropy search for information-efficient global optimization. Journal of Machine Learning Research, 13(Jun):1809–1837, 2012.
+José Miguel Hernández-Lobato, Matthew W Hoffman, and Zoubin Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), pages 918–926, 2014.
+Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 30, 2017.
+Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
+Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient descent. In Georg Dorffner, Horst Bischof, and Kurt Hornik, editors, Artificial Neural Networks – ICANN 2001, pages 87–94, Berlin, Heidelberg, 2001. Springer Berlin Heidelberg.
+Noah Hollmann, Samuel Müller, Katharina Eggensperger, and Frank Hutter. Tabpfn: A transformer that solves small tabular classification problems in a second. arXiv preprint arXiv:2207.01848, 2022.
+T. M. Hospedales, A. Antoniou, P. Micaelli, and A. J. Storkey. Meta-learning in neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
+Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
+Hanzhang Hu, John Langford, Rich Caruana, Saurajit Mukherjee, Eric Horvitz, and Debadeepta Dey. Efficient forward architecture search. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2019.
+Shou-Yong Hu, Sirui Xie, Hehui Zheng, Chunxiao Liu, Jianping Shi, Xunying Liu, and Dahua Lin. Dsnas: Direct neural architecture search without parameter retraining. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12081–12089, 2020.
+Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
+Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Proceedings of the 5th International Conference on Learning and Intelligent Optimization, LION'05, pages 507–523, Berlin, Heidelberg, 2011. Springer-Verlag. ISBN 9783642255656. doi: 10.1007/978-3-642-25566-3_40. URL https://doi.org/10.1007/978-3-642-25566-3_40.
+Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren, editors.
+page_content=' Automated Machine Learn- ing: Methods, Systems, Challenges.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Springer, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Carl Hvarfner, Frank Hutter, and Luigi Nardi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Joint entropy search for maximally-informed bayesian optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 48 Neural Architecture Search: Insights from 1000 Papers Sergio Izquierdo, Julia Guerrero-Viu, Sven Hauns, Guilherme Miotto, Simon Schrodi, Andr´e Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Bag of baselines for multi-objective joint neural architecture search and hyperparameter opti- mization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In 8th ICML Workshop on Automated Machine Learning (AutoML), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Arthur Jacot, Franck Gabriel, and Cl´ement Hongler.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Neural tangent kernel: Convergence and generalization in neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 31, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Kevin Jamieson and Ameet Talwalkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Non-stochastic best arm identification and hyper- parameter optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Mojan Javaheripi, Shital Shah, Subhabrata Mukherjee, Tomasz Lukasz Religa, Caio Ce- sar Teodoro Mendes, Gustavo Henrique de Rosa, Sebastien Bubeck, Farinaz Koushanfar, and Debadeepta Dey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Litetransformersearch: Training-free on-device search for efficient autoregressive language models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural In- formation Processing Systems (NeurIPS), 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Shengli Jiang and Prasanna Balaprakash.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Graph neural network architecture search for molecular property prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In 2020 IEEE International Conference on Big Data (Big Data), pages 1346–1353.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' IEEE, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Haifeng Jin, Qingquan Song, and Xia Hu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Auto-keras: An efficient neural architecture search system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Haifeng Jin, Qingquan Song, and Xia Hu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Auto-keras: An efficient neural architecture search system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1946–1956.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' ACM, 2019b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Donald R Jones, Matthias Schonlau, and William J Welch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Efficient global optimization of expensive black-box functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Journal of Global optimization, 13(4):455–492, 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Arlind Kadra, Marius Lindauer, Frank Hutter, and Josif Grabocka.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Regularization is all you need: Simple neural nets can excel on tabular data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='11189, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, and Barnab´as P´oczos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Multi- fidelity Bayesian optimisation with continuous approximations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Machine Learning (ICML), 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, and Eric P Xing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Neural architecture search with bayesian optimisation and optimal transport.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' David Jacob Kedziora, Katarzyna Musial, and Bogdan Gabrys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Autonoml: Towards an integrated framework for autonomous machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='12600, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 49 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter Hiroaki Kitano.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Designing neural networks using genetic algorithms with graph generation system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Complex systems, 4(4):461–476, 1990.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Jyrki Kivinen and Manfred K Warmuth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Exponentiated gradient versus gradient descent for linear predictors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' information and computation, 132, 1997.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Aaron Klein, Stefan Falkner, Jost Tobias Springenberg, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Learning curve prediction with bayesian neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Aaron Klein, Louis Tiao, Thibaut Lienart, Cedric Archambeau, and Matthias Seeger.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Model-based asynchronous hyperparameter and neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='10865, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nikita Klyuchnikov, Ilya Trofimov, Ekaterina Artemova, Mikhail Salnikov, Maxim Fedorov, Alexander Filippov, and Evgeny Burnaev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nas-bench-nlp: neural architecture search benchmark for natural language processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' IEEE Access, 10:45736–45747, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Masayuki Kobayashi and Tomoharu Nagao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' A multi-objective architecture search for gen- erative adversarial networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the 2020 Genetic and Evolutionary Com- putation Conference Companion, pages 133–134, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Bernard Koch, Emily Denton, Alex Hanna, and Jacob G Foster.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Reduced, reused and recycled: The life of a dataset in machine learning research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='01716.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Arjun Krishnakumar, Colin White, Arber Zela, Renbo Tu, Mahmoud Safari, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nas-bench-suite-zero: Accelerating research on zero cost proxies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Imagenet classification with deep convolutional neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural In- formation Processing Systems (NeurIPS), 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' David Krueger, Chin-Wei Huang, Riashat Islam, Ryan Turner, Alexandre Lacoste, and Aaron Courville.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Bayesian hypernetworks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:1710.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='04759, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Deepika Kumari and Kamaljit Kaur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' A survey on stereo matching techniques for 3d vision in image processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Eng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Manuf, 4:40–49, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Kevin Alexander Laube, Maximus Mutschler, and Andreas Zell.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' What to expect of hardware metric predictors in NAS, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' URL https://openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='id=2DJn3E7lXu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yann LeCun, Patrick Haffner, L´eon Bottou, and Yoshua Bengio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Object recognition with gradient-based learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Shape, contour and grouping in computer vision, 1999.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Hayeon Lee, Eunyoung Hyung, and Sung Ju Hwang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Rapid neural architecture search by learning to generate graphs from datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 50 Neural Architecture Search: Insights from 1000 Papers Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Set transformer: A framework for attention-based permutation-invariant neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Machine Learning (ICML), 2019a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Snip: Single-shot network prun- ing based on connection sensitivity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2019b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, and Xiaojun Chang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Bossnas: Exploring hybrid cnn-transformers with block-wisely self- supervised neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12281–12291, 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Chaojian Li, Zhongzhi Yu, Yonggan Fu, Yongan Zhang, Yang Zhao, Haoran You, Qixuan Yu, Yue Wang, Cong Hao, and Yingyan Lin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' {HW}-{nas}-bench: Hardware-aware neu- ral architecture search benchmark.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Guohao Li, Guocheng Qian, Itzel C Delgadillo, Matthias Muller, Ali Thabet, and Bernard Ghanem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Sgas: Sequential greedy architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1620–1630, 2020a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Jian Li, Yong Liu, Jiankun Liu, and Weiping Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Neural architecture optimization with graph vae.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='10310, 2020b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Liam Li and Ameet Talwalkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Random search and reproducibility for neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Uncertainty in Artificial Intelligence (UAI), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin Recht, and Ameet Talwalkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' A system for massively parallel hyperparameter tuning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Conference on Machine Learning Systems (MLSys), 2020c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Liam Li, Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Geometry-aware gradient algorithms for neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2021c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Hyperband: A novel bandit-based approach to hyperparameter optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In JMLR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yuhong Li, Cong Hao, Pan Li, Jinjun Xiong, and Deming Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Generic neural architec- ture search via regression.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 34:20476–20490, 2021d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Dongze Lian, Yin Zheng, Yintao Xu, Yanxiong Lu, Leyu Lin, Peilin Zhao, Junzhou Huang, and Shenghua Gao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Towards fast adaptation of neural architectures with meta learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 51 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter Hanwen Liang, Shifeng Zhang, Jiacheng Sun, Xingqiu He, Weiran Huang, Kechen Zhuang, and Zhenguo Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Darts+: Improved differentiable architecture search with early stopping.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:1909.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='06035, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Ming Lin, Pichao Wang, Zhenhong Sun, Hesen Chen, Xiuyu Sun, Qi Qian, Hao Li, and Rong Jin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Zen-nas: A zero-shot nas for high-performance image recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 347–356, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Marius Lindauer and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Best practices for scientific research on neural archi- tecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In JMLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Marius Lindauer, Katharina Eggensperger, Matthias Feurer, Andr´e Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, Ren´e Sass, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Smac3: A versa- tile bayesian optimization package for hyperparameter optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Journal of Machine Learning Research, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei- Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Progressive neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the European Conference on Computer Vision (ECCV), pages 19–34, 2018a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L. Yuille, and Li Fei-Fei. Auto-DeepLab: Hierarchical neural architecture search for semantic image segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019a.

Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L. Yuille, and Li Fei-Fei. Auto-DeepLab: Hierarchical neural architecture search for semantic image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019b.
Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical representations for efficient architecture search. In Proceedings of the International Conference on Learning Representations (ICLR), 2018b.

Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In Proceedings of the International Conference on Learning Representations (ICLR), 2019c.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach, 2019d.

Yuqiao Liu, Yanan Sun, Bing Xue, Mengjie Zhang, Gary G Yen, and Kay Chen Tan. A survey on evolutionary neural architecture search. IEEE Transactions on Neural Networks and Learning Systems, 2021a.
Neural Architecture Search: Insights from 1000 Papers

Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021b.
Mohammad Loni, Sima Sinaei, Ali Zoljodi, Masoud Daneshtalab, and Mikael Sjödin. DeepMaker: A multi-objective optimization framework for deep neural networks in embedded systems. Microprocessors and Microsystems, 73:102989, 2020.

Zhichao Lu, Ian Whalen, Vishnu Boddeti, Yashesh Dhebar, Kalyanmoy Deb, Erik Goodman, and Wolfgang Banzhaf. NSGA-Net: Neural architecture search using multi-objective genetic algorithm. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2019.

Zhichao Lu, Kalyanmoy Deb, Erik Goodman, Wolfgang Banzhaf, and Vishnu Naresh Boddeti. NSGANetV2: Evolutionary multi-objective surrogate-assisted neural architecture search. In Computer Vision – ECCV 2020, pages 35–51, Cham, 2020. Springer International Publishing.
Jovita Lukasik, David Friede, Arber Zela, Frank Hutter, and Margret Keuper. Smooth variational graph embeddings for efficient neural architecture search. In International Joint Conference on Neural Networks (IJCNN), 2021.

Jovita Lukasik, Steffen Jung, and Margret Keuper. Learning where to look – generative NAS is surprisingly efficient. In The European Conference on Computer Vision (ECCV), 2022.

Jelena Luketina, Mathias Berglund, Klaus Greff, and Tapani Raiko. Scalable gradient-based tuning of continuous regularization hyperparameters. In Proceedings of the International Conference on Machine Learning (ICML), pages 2952–2960, 2016.

Renqian Luo, Xu Tan, Rui Wang, Tao Qin, Enhong Chen, and Tie-Yan Liu. Semi-supervised neural architecture search. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
Sebastian Lutz, Konstantinos Amplianitis, and Aljoscha Smolic. AlphaGAN: Generative adversarial networks for natural image matting. In The British Machine Vision Conference (BMVC), 2018.

Lizheng Ma, Jiaxu Cui, and Bo Yang. Deep neural architecture search with deep graph Bayesian optimization. In 2019 IEEE/WIC/ACM International Conference on Web Intelligence (WI), pages 500–507. IEEE, 2019.

Matthew Mackay, Paul Vicol, Jonathan Lorraine, David Duvenaud, and Roger Grosse. Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.

Neeratyoy Mallik and Noor Awad. DEHB: Evolutionary Hyperband for scalable, robust and efficient hyperparameter optimization. In The International Joint Conference on Artificial Intelligence (IJCAI), 2021.
White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter

Abhinav Mehrotra, Alberto Gil C. P. Ramos, Sourav Bhattacharya, Łukasz Dudziak, Ravichander Vipperla, Thomas Chau, Mohamed S Abdelfattah, Samin Ishtiaq, and Nicholas Donald Lane. NAS-Bench-ASR: Reproducible neural architecture search for speech recognition. In Proceedings of the International Conference on Learning Representations (ICLR), 2021.
Yash Mehta, Colin White, Arber Zela, Arjun Krishnakumar, Guri Zabergja, Shakiba Moradian, Mahmoud Safari, Kaicheng Yu, and Frank Hutter. NAS-Bench-Suite: NAS evaluation is (now) surprisingly easy. In Proceedings of the International Conference on Learning Representations (ICLR), 2022.

Joe Mellor, Jack Turner, Amos Storkey, and Elliot J Crowley. Neural architecture search without training. In Proceedings of the International Conference on Machine Learning (ICML), pages 7588–7598. PMLR, 2021.

H Mendoza, A Klein, M Feurer, J Springenberg, and F Hutter. Towards automatically-tuned neural networks. In ICML 2016 AutoML Workshop, 2016.

Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
Microsoft. Neural Network Intelligence, 2021. URL https://github.com/microsoft/nni.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2013.

Geoffrey F Miller, Peter M Todd, and Shailesh U Hegde. Designing neural networks using genetic algorithms. In ICGA, volume 89, pages 379–384, 1989.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, Feb 2015.

Jonas Močkus. On Bayesian methods for seeking the extremum. In Optimization Techniques IFIP Technical Conference, pages 400–404. Springer, 1975.
J. Pablo Muñoz, Nikolay Lyalyushkin, Yash Akhauri, Anastasia Senina, Alexander Kozlov, and Nilesh Jain. Enabling NAS with automated super-network generation. AAAI 1st International Workshop on Practical Deep Learning in the Wild, 2022.

Byunggook Na, Jisoo Mok, Hyeokjun Choe, and Sungroh Yoon. Accelerating neural architecture search via proxy data. The International Joint Conference on Artificial Intelligence (IJCAI), 2021.
Ashwin Raaghav Narayanan, Arber Zela, Tonmoy Saikia, Thomas Brox, and Frank Hutter. Multi-headed neural ensemble search. In Workshop on Uncertainty and Robustness in Deep Learning (UDL@ICML'21), 2021.

Aviv Navon, Aviv Shamsian, Gal Chechik, and Ethan Fetaya. Learning the Pareto front with hypernetworks. In Proceedings of the International Conference on Learning Representations (ICLR), 2021.

Niv Nayman, Asaf Noy, Tal Ridnik, Itamar Friedman, Rong Jin, and Lihi Zelnik. XNAS: Neural architecture search with expert advice. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 32, 2019.
Renato Negrinho and Geoff Gordon. DeepArchitect: Automatically designing and training deep architectures. stat, 1050:28, 2017.

Vladimir Nekrasov, Hao Chen, Chunhua Shen, and Ian Reid. Fast neural architecture search of compact semantic segmentation models via auxiliary cells. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.

Vu Nguyen, Tam Le, Makoto Yamada, and Michael A Osborne. Optimal transport kernels for sequential and parallel neural architecture search. In Proceedings of the International Conference on Machine Learning (ICML), pages 8084–8095. PMLR, 2021.

Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv preprint, 2018.
Xuefei Ning, Yin Zheng, Tianchen Zhao, Yu Wang, and Huazhong Yang. A generic graph-based neural architecture encoding scheme for predictor-based NAS. In European Conference on Computer Vision, pages 189–204. Springer, 2020.

Xuefei Ning, Changcheng Tang, Wenshuo Li, Zixuan Zhou, Shuang Liang, Huazhong Yang, and Yu Wang. Evaluating efficient performance estimators of neural architectures. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 34, 2021.

Matheus Nunes and Gisele L Pappa. Neural architecture search in graph neural networks. In Brazilian Conference on Intelligent Systems, pages 302–317. Springer, 2020.
R. Olson, N. Bartley, R. Urbanowicz, and J. Moore. Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. In T. Friedrich, editor, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'16), pages 485–492. ACM, 2016.

T Den Ottelander, Arkadiy Dushatskiy, Marco Virgolin, and Peter AN Bosman. Local search is a remarkably strong baseline for neural architecture search. In International Conference on Evolutionary Multi-Criterion Optimization, 2021.
Daiyi Peng, Xuanyi Dong, Esteban Real, Mingxing Tan, Yifeng Lu, Gabriel Bender, Hanxiao Liu, Adam Kraft, Chen Liang, and Quoc Le. PyGlove: Symbolic programming for automated machine learning. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.

Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In Proceedings of the International Conference on Machine Learning (ICML), 2018.
Aloïs Pourchot, Alexis Ducarouge, and Olivier Sigaud. To share or not to share: A comprehensive appraisal of weight-sharing. arXiv preprint arXiv:2002.04289, 2020.

Vishak Prasad, Colin White, Paarth Jain, Sibasis Nayak, Rishabh Iyer, and Ganesh Ramakrishnan. Speeding up NAS with adaptive subset selection. arXiv preprint arXiv:2211.01454, 2022.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
+page_content=' Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Designing network design spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Exploring the limits of transfer learning with a unified text-to-text transformer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Mach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=', 21(140), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Inioluwa Deborah Raji, Emily M Bender, Amandalynne Paullada, Emily Denton, and Alex Hanna.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Ai and the everything in the whole wide world benchmark.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Aditya Rawal, Joel Lehman, Felipe Petroski Such, Jeff Clune, and Kenneth O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Stanley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Synthetic petri dish: A novel surrogate model for rapid architecture search, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Le, and Alexey Kurakin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Large-scale evolution of image classifiers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Machine Learning (ICML), 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Regularized evolution for image classifier architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Esteban Real, Chen Liang, David So, and Quoc Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Automl-zero: Evolving machine learning algorithms from scratch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Machine Learning (ICML), pages 8007–8019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' PMLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen, and Xin Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' A comprehensive survey of neural architecture search: Challenges and solutions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='02903, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 56 Neural Architecture Search: Insights from 1000 Papers Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher R´e, and Ameet Tal- walkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Rethinking neural operations for diverse tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Olaf Ronneberger, Philipp Fischer, and Thomas Brox.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' U-net: Convolutional networks for biomedical image segmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Nassir Navab, Joachim Hornegger, William M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Wells, and Alejandro F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Frangi, editors, Medical Image Computing and Computer-Assisted In- tervention – MICCAI 2015, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Binxin Ru, Clare Lyle, Lisa Schut, Mark van der Wilk, and Yarin Gal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Revisiting the train loss: an efficient performance estimator for neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' stat, 1050:8, 2020a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Binxin Ru, Xingchen Wan, Xiaowen Dong, and Michael Osborne.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Neural architecture search using bayesian optimisation with weisfeiler-lehman kernel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Robin Ru, Pedro Esperan¸ca, and Fabio Maria Carlucci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Neural architecture generator optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 33, 2020b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Michael Ruchte, Arber Zela, Julien Siems, Josif Grabocka, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Naslib: a modular and flexible neural architecture search library, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Tonmoy Saikia, Yassine Marrakchi, Arber Zela, Frank Hutter, and Thomas Brox.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Autodisp- net: Improving disparity estimation with automl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In The IEEE International Conference on Computer Vision (ICCV), October 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Improved techniques for training gans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 29, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Mobilenetv2: Inverted residuals and linear bottlenecks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE con- ference on computer vision and pattern recognition, pages 4510–4520, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Santanu Santra, Jun-Wei Hsieh, and Chi-Fang Lin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Gradient descent effects on differential neural architecture search: A survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' IEEE Access, 9:89602–89618, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Shreyas Saxena and Jakob Verbeek.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Convolutional neural fabrics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Jurgen Schmidhuber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Evolutionary principles in self-referential learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' on learning how to learn: The meta-meta-meta.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='-hook.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Master’s thesis, Technische Universitaet Muenchen, Germany, 1987.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' J¨urgen Schmidhuber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Learning to control fast-weight memories: An alternative to dynamic recurrent networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Neural Computation, 4(1):131–139, 1992.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' J¨urgen Schmidhuber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' A ‘self-referential’weight matrix.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In International conference on arti- ficial neural networks, pages 446–450.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Springer, 1993.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 57 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter Lennart Schneider, Florian Pfisterer, Martin Binder, and Bernd Bischl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Mutation is all you need.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In 8th ICML Workshop on Automated Machine Learning (AutoML), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Christoph Schorn, Thomas Elsken, Sebastian Vogel, Armin Runge, Andre Guntoro, and Gerd Ascheid.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Automated design of error-resilient and hardware-efficient deep neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Springer Neural Computing and Applications, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proximal policy optimization algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' ArXiv, abs/1707.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='06347, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Christian Sciuto, Kaicheng Yu, Martin Jaggi, Claudiu Musat, and Mathieu Salzmann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Eval- uating the search phase of neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Gresa Shala, Thomas Elsken, Frank Hutter, and Josif Grabocka.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Transfer NAS with meta- learned bayesian surrogates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Sixth Workshop on Meta-Learning at the Conference on Neural Information Processing Systems, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Albert Shaw, Daniel Hunter, Forrest Landola, and Sammy Sidhu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Squeezenas: Fast neural architecture search for faster semantic segmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In The IEEE International Confer- ence on Computer Vision (ICCV) Workshops, Oct 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Junhong Shen, Mikhail Khodak, and Ameet Talwalkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Efficient architecture search for diverse tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yu Shen, Yang Li, Jian Zheng, Wentao Zhang, Peng Yao, Jixiang Li, Sen Yang, Ji Liu, and Cui Bin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proxybo: Accelerating neural architecture search via bayesian optimization with zero-cost proxies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2110.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='10423, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Han Shi, Renjie Pi, Hang Xu, Zhenguo Li, James Kwok, and Tong Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Bridging the gap between sample-based and one-shot neural architecture search with bonas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Jae-hun Shim, Kyeongbo Kong, and Suk-Ju Kang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Core-set sampling for efficient neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2107.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='06869, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yao Shu, Shaofeng Cai, Zhongxiang Dai, Beng Chin Ooi, and Bryan Kian Hsiang Low.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nasi: Label-and data-agnostic neural architecture search at initialization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yao Shu, Yizhou Chen, Zhongxiang Dai, and Bryan Low.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Neural ensemble search via bayesian sampling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Uncertainty in Artificial Intelligence (UAI), 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Julien Siems, Lucas Zimmer, Arber Zela, Jovita Lukasik, Margret Keuper, and Frank Hut- ter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nas-bench-301 and the case for surrogate benchmarks for neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='09777, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 58 Neural Architecture Search: Insights from 1000 Papers David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Mastering the game of go with deep neural networks and tree search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nature, 529(7587):484–489, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Master- ing the game of go without human knowledge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nature, 550(7676):354–359, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' David So, Quoc Le, and Chen Liang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' The evolved transformer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Machine Learning (ICML).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' PMLR, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' David R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' So, Wojciech Ma´nke, Hanxiao Liu, Zihang Dai, Noam Shazeer, and Quoc V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Primer: Searching for efficient transformers for language modeling, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Gowthami Somepalli, Micah Goldblum, Avi Schwarzschild, C Bayan Bruss, and Tom Gold- stein.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Saint: Improved neural networks for tabular data via row attention and contrastive pre-training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='01342, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, and Yunhe Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Efficient resid- ual dense block search for image super-resolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), volume 34, pages 12007–12014, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Bayesian opti- mization with robust bayesian neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), pages 4134–4142, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Gaussian process optimization in the bandit setting: No regret and experimental design.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the 27th International Conference on Machine Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Omnipress, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Kenneth O Stanley and Risto Miikkulainen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Evolving neural networks through augmenting topologies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Evolutionary computation, 10(2):99–127, 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yan Wu, Zhiwu Huang, Suryansh Kumar, Rhea Sanjay Sukthanker, Radu Timofte, and Luc Van Gool.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Trilevel neural architecture search for efficient single image super-resolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2101.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='06658, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Lichuan Xiang, �Lukasz Dudziak, Mohamed S Abdelfattah, Thomas Chau, Nicholas D Lane, and Hongkai Wen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Zero-cost proxies meet differentiable architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='06799, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Lingxi Xie and Alan Yuille.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Genetic cnn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE international confer- ence on computer vision, pages 1379–1388, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Lingxi Xie, Xin Chen, Kaifeng Bi, Longhui Wei, Yuhui Xu, Lanfei Wang, Zhengsu Chen, An Xiao, Jianlong Chang, Xiaopeng Zhang, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Weight-sharing neural architecture search: A battle to shrink the optimization gap.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' ACM Computing Surveys (CSUR), 54 (9):1–37, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Snas: stochastic neural architec- ture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 63 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter Hang Xu, Lewei Yao, Wei Zhang, Xiaodan Liang, and Zhenguo Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Auto-fpn: Automatic network architecture adaptation for object detection beyond classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In The IEEE International Conference on Computer Vision (ICCV), October 2019a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Jian Li, Tao Qin, and Tie-Yan Liu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nas- bert.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Aug 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='1145/3447548.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='3467262.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' URL http://dx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 1145/3447548.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='3467262.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Jin Xu, Xu Tan, Kaitao Song, Renqian Luo, Yichong Leng, Tao Qin, Tie-Yan Liu, and Jian Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Analyzing and mitigating interference in neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Machine Learning (ICML).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' PMLR, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Jingjing Xu, Liang Zhao, Junyang Lin, Rundong Gao, Xu Sun, and Hongxia Yang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Knas: green neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In International Conference on Machine Learning, pages 11613–11625.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' PMLR, 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, and Hongkai Xiong.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Pc-darts: Partial channel connections for memory-efficient architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2019b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Shen Yan, Yu Zheng, Wei Ao, Xiao Zeng, and Mi Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Does unsupervised architecture representation learning help neural architecture search?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Shen Yan, Kaiqiang Song, Fei Liu, and Mi Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Cate: Computation-aware neural archi- tecture encoding with transformers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Machine Learning (ICML), 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Shen Yan, Colin White, Yash Savani, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nas-bench-x11 and the power of learning curves.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Process- ing Systems (NeurIPS), 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Antoine Yang, Pedro M Esperan¸ca, and Fabio M Carlucci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nas evaluation is frustrat- ingly hard.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Lewei Yao, Hang Xu, Wei Zhang, Xiaodan Liang, and Zhenguo Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Sm-nas: Structural- to-modular neural architecture search for object detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Quanming Yao, Mengshuo Wang, Yuqiang Chen, Wenyuan Dai, Yu-Feng Li, Wei-Wei Tu, Qiang Yang, and Yang Yu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Taking human out of learning applications: A survey on automated machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='13306, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yichun Yin, Cheng Chen, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Autotinybert: Automatic hyper-parameter optimization for efficient pre-trained language models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In ACL, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 64 Neural Architecture Search: Insights from 1000 Papers Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, and Frank Hut- ter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nas-bench-101: Towards reproducible neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Machine Learning (ICML), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Kaicheng Yu, Rene Ranftl, and Mathieu Salzmann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' How to train your super-net: An analysis of training heuristics in weight-sharing nas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='04276, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Tong Yu and Hong Zhu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Hyper-parameter optimization: A review of algorithms and appli- cations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='05689, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Sergey Zagoruyko and Nikos Komodakis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Wide residual networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In British Machine Vision Conference, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhut- dinov, and Alexander J Smola.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Deep sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Sheheryar Zaidi, Arber Zela, Thomas Elsken, Chris C Holmes, Frank Hutter, and Yee Teh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Neural ensemble search for uncertainty estimation and dataset shift.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 34:7898–7911, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Taskonomy: Disentangling task transfer learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3712–3722, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Arber Zela, Aaron Klein, Stefan Falkner, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Towards automated deep learning: Efficient joint neural architecture and hyperparameter search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:1807.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='06906, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Arber Zela, Thomas Elsken, Tonmoy Saikia, Yassine Marrakchi, Thomas Brox, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Understanding and robustifying differentiable architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2020a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Arber Zela, Julien Siems, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Nas-bench-1shot1: Benchmarking and dissect- ing one-shot neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2020b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Chris Zhang, Mengye Ren, and Raquel Urtasun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Graph hypernetworks for neural architec- ture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Haokui Zhang, Ying Li, Hao Chen, and Chunhua Shen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Memory-efficient hierarchical neural architecture search for image denoising.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3657–3666, 2020a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Miao Zhang, Steven W Su, Shirui Pan, Xiaojun Chang, Ehsan M Abbasnejad, and Reza Haffari.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' idarts: Differentiable architecture search with stochastic implicit gradients.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In International Conference on Machine Learning, pages 12557–12566.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' PMLR, 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 65 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter Muhan Zhang, Shali Jiang, Zhicheng Cui, Roman Garnett, and Yixin Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' D-vae: A vari- ational autoencoder for directed acyclic graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yuge Zhang, Zejun Lin, Junyang Jiang, Quanlu Zhang, Yujing Wang, Hui Xue, Chen Zhang, and Yaming Yang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Deeper insights into weight sharing in neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='01431, 2020b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Ziwei Zhang, Xin Wang, and Wenwu Zhu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Automated machine learning on graphs: A survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' IJCAI Survey Track, 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='00742.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Huan Zhao, Lanning Wei, and Quanming Yao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Simplifying architecture search for graph neural network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='11652, 2020a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yiren Zhao, Duo Wang, Xitong Gao, Robert Mullins, Pietro Lio, and Mateja Jamnik.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Prob- abilistic dual network architecture search on graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='09676, 2020b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yiyang Zhao, Linnan Wang, Kevin Yang, Tianjun Zhang, Tian Guo, and Yuandong Tian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Multi-objective optimization by learning space partition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In International Conference on Learning Representations, 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Yuekai Zhao, Li Dong, Yelong Shen, Zhihua Zhang, Furu Wei, and Weizhu Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Memory- efficient differentiable transformer architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Findings of the Association for Computational Linguistics, 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Dongzhan Zhou, Xinchi Zhou, Wenwei Zhang, Chen Change Loy, Shuai Yi, Xuesen Zhang, and Wanli Ouyang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Econas: Finding proxies for economical neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11396–11404, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Kaichen Zhou, Lanqing Hong, Shoukang Hu, Fengwei Zhou, Binxin Ru, Jiashi Feng, and Zhenguo Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Dha: End-to-end joint optimization of data augmentation policy, hyper- parameter and architecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:2109.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='05765, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Kaixiong Zhou, Qingquan Song, Xiao Huang, and Xia Hu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Auto-gnn: Neural architecture search of graph neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' arXiv preprint arXiv:1909.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content='03184, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Lucas Zimmer, Marius Lindauer, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Auto-pytorch tabular: Multi-fidelity metalearning for efficient and robust autodl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Barret Zoph and Quoc V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Neural architecture search with reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' Learning transferable architectures for scalable image recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' In CVPR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
+page_content=' 66' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'}
diff --git a/atE1T4oBgHgl3EQfxAUb/vector_store/index.faiss b/atE1T4oBgHgl3EQfxAUb/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..7022bca9157cf3bcba0fbc4ac9617cc0c3ed52de
--- /dev/null
+++ b/atE1T4oBgHgl3EQfxAUb/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e1be8ba1c5a4dbe787391e06d2f2a8032a41ff9859e914201e7c4142efbbc377
+size 4522029
diff --git a/b9AyT4oBgHgl3EQf-PqC/content/tmp_files/2301.00889v1.pdf.txt b/b9AyT4oBgHgl3EQf-PqC/content/tmp_files/2301.00889v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e59878d29b3559a618aec579dbc2e3bfc174472e
--- /dev/null
+++ b/b9AyT4oBgHgl3EQf-PqC/content/tmp_files/2301.00889v1.pdf.txt
@@ -0,0 +1,1885 @@
+An empirical process framework for covariate
+balance in causal inference
+Efr´en Cruz Cort´es
+Michigan Institute for Data Science
+Center for the Study of Complex Systems
+University of Michigan
+encc@umich.edu
+Kevin Josey
+Department of Biostatistics
+Harvard T.H. Chan School of Public Health
+kjosey@hsph.harvard.edu
+Fan Yang
+Department of Biostatistics and Informatics
+Colorado School of Public Health
+fan.3.yang@cuanschutz.edu
+Debashis Ghosh
+Department of Biostatistics and Informatics
+Colorado School of Public Health
+debashis.ghosh@cuanschutz.edu
+Abstract
+We propose a new perspective for the evaluation of matching procedures by considering
+the complexity of the function class they belong to. Under this perspective we provide
+theoretical guarantees on post-matching covariate balance through a finite sample con-
+centration inequality. We apply this framework to coarsened exact matching as well as
+matching using the propensity score and suggest how to apply it to other algorithms.
+Simulation studies are used to evaluate the procedures.
+keywords: Causal effects, empirical distribution function, entropy metric, superpopulation, tail
+inequality, Vapnik-Chervonenkis dimension.
+1
+Introduction
+Causal inference is a central goal for outcomes and policy research, particularly in the medical field.
+Among the many topics in this broad field of study are methods for evaluating treatment effects
+with non-randomized data. There is an abundance of observational data in nearly every discipline of
+science. However, bias induced by confounding is inherent in observational studies. In this context,
+the researcher must account for every potential confounder in some way before they can establish
+causality. While randomization remains the gold-standard for inference, as there is no confounding
+by definition, randomizing individuals into treatment groups is often cost prohibitive and sometimes
+unethical for certain study designs.
+Under the potential outcomes framework (Neyman, 1923; Rubin, 1974), Rosenbaum and Rubin
+(1983) were able to describe how the propensity score plays a key role in causal effect estimation and
+inference with observational data. The propensity score is defined as the probability of receiving a
+treatment given a set of measured covariates. Under the strong ignorability assumption, the propensity
+score removes bias attributable to confounding due to its property as a balancing score (Rosenbaum
+and Rubin, 1983). With this result in mind, numerous methods for causal effect estimation were
+arXiv:2301.00889v1 [math.ST] 2 Jan 2023
+
+subsequently developed around the propensity score, with covariate balance serving as the primary
+objective (e.g., Imai and Ratkovic (2014); Zubizarreta (2015); Chan et al. (2016)). However, the
+results presented by Rosenbaum and Rubin (1983) about the propensity score are derived in an
+asymptotic setting. This means that estimates of the propensity score may not adequately balance
+the covariate distribution in finite settings.
+Therefore, many methods proceed by iterating
+between fitting a model for the propensity score and evaluating balance diagnostics on the propensity
+score adjusted covariates before estimating the treatment effect of interest.
+Some methods for
+evaluating balance diagnostics have been proposed by Ho et al. (2007) and Sekhon (2008). The
+propensity score literature has mostly diverged into two overlapping yet distinct domains - one
+that uses the propensity score to derive balancing weights (Hainmueller, 2012; Imai and Ratkovic,
+2014; Chan et al., 2016) and the other that uses a balancing score, such as the propensity score, to
+construct a matched cohort.
+Recently, a multivariate matching approach using coarsened values of the observed covariates was
+developed by Iacus et al. (2011). They refer to their algorithm as coarsened exact matching. One
+of the primary aims of their method was to eliminate the iterative step of re-matching participants
+until an acceptable amount of balance is achieved. Coarsened exact matching is quite simple in
+nature and proceeds using the following high-level heuristic:
+1. For each confounding variable, coarsen it into a certain number of categories;
+2. Create strata based on the possible combinations of the coarsened values;
+3. Compute a causal effect by comparing the outcomes of the treatment groups within the strata
+and adjusting for the stratum effect appropriately.
+The theoretical justification provided by Iacus et al. (2011) for coarsened exact matching is a
+concept they term monotonic imbalance. They show that bounding the distance between confounders
+to be small leads to matching procedures that are more flexible than procedures based on the
+equal percent bias reduction theory developed by Rubin and collaborators (Rubin, 1976; Rubin and
+Thomas, 1992; Rubin et al., 2006). One of the main advantages of coarsened exact matching is that
+it becomes amenable to large-scale database querying approaches to performing causal inference: see
+Salimi and Suciu (2016) as well as Wang et al. (2017).
+However, fewer technical results exist for matching estimators than for other approaches, such as
+inverse probability weighting estimators. Abadie and Imbens (2006) have studied the large-sample
+asymptotics of matching estimators and found that in general, matching-based estimators of average
+causal effect did not have the usual n1/2 convergence. The intuition is that the matching algorithm
+introduces a bias into causal effect estimation that did not vanish asymptotically. This bias term also
+increased with the number of confounders. Bias-corrected estimators have been proposed by Abadie
+and Imbens (2011). Abadie and Imbens (2016) performed a theoretical study of the asymptotic
+behavior of average causal effect estimators that match using the estimated propensity score.
+Conceptually, achieving covariate balance is a multivariate concept. If we let L(Z | T = 0) and
+L(Z | T = 1) denote the probability laws for the confounders conditional on treatment status then,
+ideally, as in the case of perfect randomization, these distributions are equal in some sense. We refer
+to this sense of equality as covariate balance.
+Most covariate balance methods do not take the joint distribution of confounders into account but
+rather seek to match moments of the marginal distributions for the confounders. For example, Imai
+and Ratkovic (2014) proposed matching the first and second moments of covariates in their algorithm.
+Practically, one-dimensional diagnostics such as mean comparisons of confounders between treatment
+groups or Kolmogorov-Smirnov statistics are used to evaluate balance. Wang and Zubizarreta (2019)
+have argued that due to the inherent complexity in attempting to achieve multivariate balance, one
+should instead strive to achieve approximate balance between confounders.
+In this paper, we propose a new theoretical approach to evaluating and understanding covariate
+balance. We introduce a distance metric to assess how close two multivariate distributions are from
+each other and define covariate balance as having zero distance. This metric is defined in terms of
+the function family the matching procedure belongs to. Subsequent assessment of balance relies on
+understanding the behavior of the function classes in question. We demonstrate the following in the
+current paper:
+1. The use of function classes fits naturally with the use of probability metrics (Zolotarev, 1984)
+for comparing probability laws and in this instance, multivariate distributions for confounders
+conditional on treatment.
+2. Results from empirical process theory (Van Der Vaart and Wellner, 1996; Kosorok, 2007)
+can subsequently be used to study the behavior of function classes and to make probabilistic
+statements on the rates of convergence of matching procedures under ideal balance.
+3. Ideal balance provides a new theoretical out-of-sample justification for the methodology of
+Iacus et al. (2011) and can be used for the evaluation of other algorithmic strategies.
+Based on the framework, one can view the techniques in this paper as being akin to developing a scal-
+able strategy for achieving covariate balance that has relatively low complexity from the viewpoint
+described in Section 3.
+2
+Background and Preliminaries
+2.1
+Data Structures and Causal Estimands
+Let the data be represented as (Yi, Ti, Zi), i = 1, . . . , n, a random sample from the triple (Y, T, Z),
+where Y denotes the response of interest, T denotes the treatment group, and Z is a p-dimensional
+vector of covariates. We assume that T takes values in {0, 1}.
+We now briefly review the potential outcomes framework (Rubin, 1974; Holland, 1986).
+Let
+{Y (0), Y (1)} denote the potential outcomes for all n subjects, and the observed response be related
+to the potential outcomes by
+Y = (1 − T)Y (0) + TY (1).
+In the potential outcomes framework, causal effects are defined as within-individual contrasts based
+on the potential outcomes. One popularly used estimand is the average causal effect, defined as
+ACE = \frac{1}{n} \sum_{i=1}^{n} \left( Y_i(1) - Y_i(0) \right).
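As a toy numerical check (hypothetical potential outcomes, not data from the paper), the average causal effect is simply the mean of the unit-level contrasts:

```python
import numpy as np

# Hypothetical potential outcomes for n = 5 units. In practice only one of
# Y_i(0), Y_i(1) is observed per unit, so the ACE is an estimand rather
# than a directly computable statistic.
y0 = np.array([1.0, 2.0, 0.5, 3.0, 1.5])  # Y_i(0)
y1 = np.array([2.0, 2.5, 1.5, 3.0, 2.0])  # Y_i(1)

# ACE = (1/n) * sum_i (Y_i(1) - Y_i(0))
ace = np.mean(y1 - y0)
```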
+Many assumptions are needed for performing valid causal inference.
+These include the con-
+sistency assumption, the treatment positivity assumption, and the strongly ignorable treatment
+assumption (Rosenbaum and Rubin, 1983), defined as
+T ⊥ {Y (0), Y (1)} | Z.
+(2.1)
+Assumption (2.1) means that treatment assignment is conditionally independent of the set of po-
+tential outcomes given the covariates. Treatment positivity refers to 1 > P(T = 1 | Z) > 0 for
+all values of Z. Thus, the intuition is that any individual can potentially receive either treatment.
+Finally, the consistency assumption ensures that the observed outcome and the potential outcome
+under the observed treatment coincide.
+As described recently by Imbens and Rubin (2015), causal inference proceeds by modelling the
+assignment mechanism using observed covariates. A quantity that naturally arises from this mod-
+elling is the propensity score (Rosenbaum and Rubin, 1983), the probability of receiving treatment
+given confounders. The propensity score is defined as
+e(Z) = P(T = 1 | Z).
+
+Given the treatment ignorability assumption in (2.1), it also follows by Theorem 3 of Rosenbaum
+and Rubin (1983) that treatment is strongly ignorable given the propensity score, i.e.
+T ⊥ {Y (0), Y (1)} | e(Z).
+Based on these assumptions and definitions, we can formulate causal inference using the following
+approach: (a) define an appropriate causal estimand; (b) formulate a propensity score model; (c)
+check for covariate balance; (d) if (c) holds, estimate the causal estimand by conditioning on the
+propensity scores. We note that steps (b) and (c) tend to be iterative in practice. While the results
+in this paper pertain to propensity-matched analyses, they apply to more general matching strategies
+as well.
+2.2
+Previous results on covariate balance
+In terms of covariate balance, a major class of theoretical results comes from work on equal percent
+bias reduction procedures (Rubin and Thomas, 1992, 1996). Equal percent bias reduction means
+that a certain type of covariate matching will reduce bias in all dimensions of Z by the same amount.
+Define a matching method to be affinely invariant if the matching procedure is invariant to
+affine transformations of the covariates. If Z given T is assumed to have a so-called elliptically
+symmetric distribution, then Theorem 3.1 and Corollaries 3.1 and 3.2 of Rubin and Thomas
+(1992) apply so that any affinely invariant matching method will be equal percent bias reducing.
+Examples of elliptically symmetric distributions include the multivariate normal and t distributions.
+While elliptical symmetry of the confounders given treatment group is a restrictive assumption, this
+was relaxed in more recent work by Rubin et al. (2006). There, they assumed that the conditional
+distribution of Z given T is a discriminant mixture of elliptically symmetric distributions. Rubin
+et al. (2006) prove that a generalization of equal percent bias reducing holds for this setup as well.
+Thus, for equal percent bias reducing methods, we have a guarantee that attempting to increase
+balance in one variable will not lead to distortions in balance for other variables. However, the
+assumptions needed for equal percent bias reducing to hold seem restrictive in practice. Iacus et al.
+(2011) took another approach by focusing on in-sample covariate discrepancies and requiring that
+the maximum discrepancy in sample means between treated and control subjects be bounded above
+by a constant. They generalize this to arbitrary functions of the data, which they term imbalance
+bounding and define monotonic imbalance bounding matching methods to be those in which the
+discrepancies between a monotonic function applied to a variable is bounded above by a confounder-
+specific term. Thus, one can be more stringent in the balance of one variable without impacting the
+maximal imbalance across all confounders.
+There are many important implications of requiring the monotonic imbalance bounding property.
+First, many methods of confounder adjustment, such as nearest-neighbor or caliper matching as
+defined in Cochran and Rubin (1973), are not monotonic imbalance bounding because they fix the
+number of treated and control observations within strata, while monotonic imbalance bounding
+methods imply variable numbers of observations. By contrast, if the caliper matching procedure
+were to allow for different calipers for each confounder, then this would be monotonic imbalance
+bounding.
+Iacus et al. (2011) also show that a key goal in causal effect estimation is to reduce model
+dependence (Ho et al., 2007), meaning that there should not be extrapolation of potential outcomes
+to regions in the covariate space where there are no observations.
+Under some assumptions on
+the model for potential outcomes, they show that for monotonic imbalance bounding methods, the
+model dependence is upper bounded by terms involving an imbalance parameter. In addition, the
+estimation error for average causal effects using monotonic imbalance bounding matching methods
+can also be upper bounded by terms involving this parameter.
+As a concrete example of a new monotonic imbalance bounding method, Iacus et al. (2011)
+propose a coarsened exact matching algorithm for creating strata. It proceeds as follows:
+
+1. For each variable Zj (j = 1, . . . , p), coarsen it into a function Cj(Zj) which takes on fewer
+values than the unique values of Zj;
+2. Perform exact matching between treated and control observations using the vector
+(C1(Z1), C2(Z2), . . . , Cp(Zp)) .
+This effectively creates strata S1, . . . , SJ based on the unique combinations of
+(C1(Z1), C2(Z2), . . . , Cp(Zp)) .
+3. Discard strata in which there are only observations with T = 0. For strata with only observa-
+tions from the T = 1 population, extrapolate the potential outcome Y (0) using the available
+controls or discard by restricting the causal effect of interest on the treated units for which
+causal effect can be identified without further modelling based assumptions. For strata with
+both treated and control observations, compare the outcome between the two populations.
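The steps above translate almost line for line into code. The sketch below is our own minimal illustration, assuming equal-width coarsening of each continuous covariate; it is not the authors' cem software, and the coarsening choice is left to the analyst.

```python
import numpy as np
from collections import defaultdict

def cem_strata(Z, T, n_bins=3):
    """Coarsen each covariate into equal-width bins, form strata from the
    unique combinations of bin labels, and discard strata that lack either
    treated (T == 1) or control (T == 0) units."""
    Z = np.asarray(Z, dtype=float)
    labels = []
    for j in range(Z.shape[1]):
        col = Z[:, j]
        edges = np.linspace(col.min(), col.max(), n_bins + 1)
        # digitizing against the interior edges gives labels 0 .. n_bins-1
        labels.append(np.digitize(col, edges[1:-1]))
    strata = defaultdict(list)
    for i, key in enumerate(zip(*labels)):
        strata[key].append(i)
    return {key: idx for key, idx in strata.items()
            if any(T[i] == 1 for i in idx) and any(T[i] == 0 for i in idx)}
```

Within each retained stratum, treated and control outcomes can then be compared and pooled with stratum-size weights.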
+Iacus et al. (2011) have developed very easy-to-use software packages for implementing coarsened
+exact matching in R and Stata. They show that the coarsened exact matching approach satisfies
+the monotonic imbalance bounding property with respect to a variety of functionals of interest. In
+addition, they provide a very intuitive explanation for what coarsened exact matching attempts to
+mimic. While classical propensity score approaches attempt to mimic a randomized study, analyses
+using coarsened exact matching will mimic randomized block designs, where the blocks are by
+definition predictive of the potential outcomes. It is well-known that in this situation, randomized
+block designs will yield more efficient estimators (e.g., Box, Hunter and Hunter, 1978).
+The other approach that has become of recent interest has been to incorporate covariate balance
+as part of the causal effect estimation process. For example, Imai and Ratkovic (2014) propose using
+generalized methods of moments for causal effect estimation in which covariate balance is treated
+as a constraint in the procedure. Chan et al. (2016) propose the use of calibration estimators for
+causal effect estimation in which covariate balance constraints lead to a constrained Lagrangian
+dual optimization problem. For these approaches, the authors are able to develop consistency and
+asymptotic normality results for the causal effect estimators.
+As described in more detail in Section 3.1, we will be using an integral probability metric to
+assess covariate balance among the two populations. In Kallus (2020) a similar metric is used. They
+define such a metric as the target error to be minimized for obtaining optimal weighting coefficients
+when estimating the sample average treatment effect on the treated.
+While our approaches are
+complementary, there are several notable differences. First, in Kallus (2020), they use their metric
+to find weights that correspond to known matching methods. The functions involved in their metric
+represent the expected relationship between potential outcomes and covariates. In our case, we take
+any matching procedure, bound its measure of match by the probability metric involving functions
+representing the matching procedure itself, and provide probability bounds on how good the matching
+is. In addition, in Kallus (2020), they assume a fixed population and therefore no
+randomness in covariate values, while our concern indeed focuses on the sample distribution of these
+covariates. The difference between these two approaches is further explained in Section 2.3.
+2.3
+Modes of inference and covariate balance
+In looking at the various proposals for accommodating covariate balance, it is useful to reconsider
+the ways in which one can perform causal inference. Imbens and Rubin (2015) have a nice overview
+on the distinction between finite-population and superpopulation modes for causal inference. The
+finite-population mode of causal inference treats the sampled units as the population of interest. The
+stochastic nature of the experiment is due solely to the treatment mechanism so that randomness
+occurs only with respect to the treatment assignments. If one adopts the finite-sample point of view
+for causal inference, then one can use a randomization-based approach to performing inference for
+causal effects.
+By contrast, the superpopulation mode of inference considers two sources of variability. The
+first is due to the randomness in the treatment assignments, and the second is due to the fact that
+the sampling units are a random sample from a superpopulation.
+Thus, this approach posits a
+superpopulation from which the sampling units come from.
+Revisiting the previous work from Section 2.2, the equal percent bias reduction theory and the work of
+Iacus et al. (2011) posit results about covariate balance assuming a finite-population mode for causal
+inference. Thus, covariate balance results of these methods will involve subsampling and matching
+from the sampling units, and the balance occurs with respect to the matched sample. The concept
+of balance we introduce in the next section can accommodate both modes of inference.
+3
+Main Results
+3.1
+Ideal Balance
+In this section, we wish to study covariate balance from the viewpoint of comparing the distributions
+L(Z | T = 0) and L(Z | T = 1). To do so, we must determine how this comparison is done. We do
+this by first defining probability pseudometrics.
+Definition 3.1 (Pseudometric). Let A be the set of probability measures defined on a shared mea-
+surable space. A function m : A × A → [0, ∞) is a pseudometric on A if, for all µ, ν, λ ∈ A, the
+following conditions are satisfied:
+1. m(µ, µ) = 0.
+2. m(µ, ν) = m(ν, µ).
+3. m(µ, ν) ≤ m(µ, λ) + m(λ, ν).
+Note these properties almost make m a metric on A, but notably we do not assume that if the
+distance between two elements is zero, then the two elements are the same. For the purpose of this
+paper, we will abuse terminology and refer to pseudometrics as metrics.
+The class of metrics we will work with in this article is given by
+\gamma_F(\mu, \nu) = \sup_{f \in F} \left| \int f \, d\mu - \int f \, d\nu \right|,
+(3.1)
+where F is a class of functions. In (3.1), γF(µ, ν) is referred to by Zolotarev (1984) as an example
+of a probability metric. In our notation, we drop the dependency of γF on F and write it as γ. We
+now define ideal balance as being based on (3.1).
+Definition 3.2 (Ideal Balance). Let µ and ν be distributions on the same probability space and m a
+pseudometric, then we say µ and ν satisfy Ideal Balance with respect to m if m(µ, ν) = 0.
+When µ and ν are the conditional distributions of the covariates given the treatment group,
+as in Section 2, ideal balance is a restriction on the population. If these are instead the empirical
+distributions of the data, ideal balance is a sample restriction. Matching methods, in a sense, intend
+to achieve ideal balance on the matched data for some m.
+Note that at this stage, we have only dealt with population distributional laws and have not
+described how to estimate or compute these quantities with real data. In practice, we would not
+expect ideal balance to hold in observational studies. However, it does serve as a useful benchmark
+through which we can study the behavior of various functional constraints.
+Here, the function
+spaces F in (3.1) play the role of the constraints; more complex function spaces correspond to more
+constraints on the joint distributions of Z|T = 1 and Z|T = 0.
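When F is a finite class, γ_F between two empirical distributions reduces to a maximum over F of differences of sample means; taking F to be the half-line indicators 1{z ≤ t} recovers a Kolmogorov-Smirnov-type statistic. The sketch below is our own illustration, not code from the paper.

```python
import numpy as np

def gamma_F(sample0, sample1, functions):
    """max over a finite class F of |mean_{Q0_n} f - mean_{Q1_n} f|,
    i.e. the probability metric (3.1) at two empirical distributions."""
    return max(abs(np.mean([f(z) for z in sample0]) -
                   np.mean([f(z) for z in sample1]))
               for f in functions)

# Half-line indicators 1{z <= t} on a grid of thresholds: gamma_F is then
# the Kolmogorov-Smirnov distance restricted to that grid.
thresholds = np.linspace(-2.0, 2.0, 41)
F = [lambda z, t=t: float(z <= t) for t in thresholds]
```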
+
+3.2
+A Concentration Inequality Result
+Let F be a function space and ∥ · ∥ a norm. The covering number N(ϵ, F, ∥ · ∥) is the minimum
+number of ∥ · ∥-balls of radius ϵ needed to cover F, where a ball centered around f ∈ F is the
+set {g | ∥f − g∥ ≤ ϵ}.
+Intuitively, one can think of the covering number as a measure of the
+complexity of the function class F. For a measure µ, the L_r(µ)-norm, for r ≥ 1, is defined
+as \|f\|^r_{L_r(\mu)} = \int |f|^r \, d\mu. Throughout the paper, we will assume F is uniformly bounded. Note that
+if µ is any probability measure, and under uniform boundedness, we can endow F with the norm
+Lr(µ) without dropping any of its elements. Unless otherwise specified, we assume the range of the
+functions in F is [0, 1]. Finally, for a function class F, an envelope function of F is defined as any
+function h such that for all f in F, the inequality
+|f(x)| ≤ |h(x)|
+is satisfied for any x.
+Let \{Z_i\}_{i=1}^{n} be a sample where each Z_i has distribution Q. We denote the empirical distribution
+by Q_n. The F-indexed empirical process G^Q_n is defined as the map taking any f ∈ F to
+G^Q_n(f) = \sqrt{n} \left( \int f \, dQ_n - \int f \, dQ \right) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left( f(Z_i) - \int f \, dQ \right).
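A direct numerical transcription of this definition (our own toy example; the population integral ∫ f dQ must be supplied analytically):

```python
import numpy as np

def empirical_process(f, sample, integral_f_dQ):
    """G_n^Q(f) = sqrt(n) * (sample mean of f - integral of f dQ)."""
    sample = np.asarray(sample, dtype=float)
    return np.sqrt(len(sample)) * (np.mean(f(sample)) - integral_f_dQ)
```

For example, with f the identity and Q = Uniform(0, 1), the centering term is ∫ f dQ = 1/2.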
+Theorem 3.3. Let Q^0_{n_0} and Q^1_{n_1} be two empirical distributions of observations sampled from Q^0
+and Q^1, respectively, and assume ideal balance holds for Q^0 and Q^1 with respect to γ. Let M be the
+collection of probability measures. If there exist constants C and K such that F satisfies
+\sup_{\mu \in M} N(\epsilon, F, \| \cdot \|_{L_r(\mu)}) \le \left( \frac{K}{\epsilon} \right)^C,
+for every 0 < ϵ < C, then
+\Pr\{\gamma(Q^0_{n_0}, Q^1_{n_1}) > \delta\} \le \left( \frac{D\delta}{2\sqrt{C}} \right)^C \left( n_0^{C/2} \exp(-n_0 \delta^2/2) + n_1^{C/2} \exp(-n_1 \delta^2/2) \right),
+(3.2)
+where D is a constant depending on K only.
+The proofs of Theorem 3.3 and subsequent results are found in the supplementary material.
+Throughout the paper, we will use Bn(δ, D, C) for the bound in Theorem 3.3, where the subscript
+n reminds us of the dependence on the sample size.
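The bound B_n(δ, D, C) is cheap to evaluate. The sketch below transcribes the right-hand side of (3.2) under our reading of the extracted display; the constants D and C are illustrative inputs that in practice come from the covering-number condition.

```python
import math

def B_n(delta, n0, n1, D, C):
    """Right-hand side of (3.2): the concentration bound on
    Pr{gamma(Q0_n0, Q1_n1) > delta} under ideal balance."""
    lead = (D * delta / (2.0 * math.sqrt(C))) ** C
    tail = (n0 ** (C / 2.0) * math.exp(-n0 * delta ** 2 / 2.0)
            + n1 ** (C / 2.0) * math.exp(-n1 * delta ** 2 / 2.0))
    return lead * tail
```

For fixed δ, the bound decays geometrically in the sample sizes once n_i δ² dominates the polynomial factor.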
+Remark 3.4. We note that the bound in (3.2) is nonasymptotic and will hold for any sample size.
+Remark 3.5. In this framework, the function classes play an important role. Theorem 3.3 gives
+a bound in terms of the entropy number of the function class in question.
+In particular, low-
+complexity functions are favored using this approach. A key technical point is ensuring that the
+covering number condition in the theorem is satisfied. To do so, we will primarily use results from
+Vapnik-Chervonenkis theory (Chervonenkis and Vapnik, 1971) to determine appropriate covering
+numbers.
+In most cases the function classes of interest are not real-valued but vector-valued. The following
+straightforward results can be used to deal with these cases.
+Lemma 3.6. Let \{F_i\}_{i=1}^{d} be a collection of real-valued function spaces and let (P^i, Q^i) satisfy ideal
+balance under γ_{F_i} for each 1 ≤ i ≤ d. Let (P_i, Q_i) denote their respective empirical distributions
+with implicit sample size dependence. Then
+\Pr\left\{ \sum_{i=1}^{d} \gamma_{F_i}(P_i, Q_i) > \delta \right\} \le \sum_{i=1}^{d} B(\delta/d, D_i, C_i).
+
+Now, consider the collection \{F_i\}_{i=1}^{d}, where each F_i is a real-valued function space. Define F =
+{f = (f1, . . . , fd)T | fi ∈ Fi for all i}. Let πℓ be the ℓth coordinate projection, that is, for a finite
+dimensional vector x = (x1, . . . , xd), πℓ(x) = xℓ. Finally, define Fπ = {πℓ ◦ f | f ∈ F, 1 ≤ ℓ ≤ d}.
+Note the elements of Fπ are real-valued. The following lemma tells us we can either assume µ and
+ν satisfy ideal balance with respect to each of γFi, or that they satisfy ideal balance with respect to
+γFπ.
+Lemma 3.7. Let F, \{F_i\}_{i=1}^{d}, and F_π be as above, and let µ and ν denote two probability measures.
+Then the following are equivalent:
+1. µ and ν satisfy ideal balance with respect to γFπ;
+2. µ and ν satisfy ideal balance with respect to each γFi, 1 ≤ i ≤ d.
+3. max_i \gamma_{F_i}(\nu, \mu) = 0.
+The following corollary will be very useful:
+Corollary 3.8. Let F and F_π be as above, and F_i = F^* for all i. Assume F^* has polynomial
+covering number. Let \{X^0_j\}_{j=1}^{n_0} \sim Q^0 and \{X^1_j\}_{j=1}^{n_1} \sim Q^1, where Q^0 and Q^1 satisfy ideal balance
+with respect to γ_{F_π}. Fix f^* \in F; then
+\Pr\left\{ \left\| \frac{1}{n_0} \sum_{j=1}^{n_0} f^*(X^0_j) - \frac{1}{n_1} \sum_{j=1}^{n_1} f^*(X^1_j) \right\|_{\ell_p} > \delta \right\} \le d\,B(\delta/d^{1/p}, D^*, C^*),
+for finite p ≥ 1, and
+\Pr\left\{ \left\| \frac{1}{n_0} \sum_{j=1}^{n_0} f^*(X^0_j) - \frac{1}{n_1} \sum_{j=1}^{n_1} f^*(X^1_j) \right\|_{\ell_\infty} > \delta \right\} \le d\,B(\delta, D^*, C^*),
+where D∗, C∗ depend only on F∗.
+Definition 3.9 (Vapnik-Chervonenkis Dimension). The Vapnik-Chervonenkis dimension of a func-
+tion class F on an ambient set X is the cardinality of the largest subset shattered by F. A function
+class F shatters a set S ⊆ X if for each possible 0-1 labeling of the elements of S there is at least
+one function f ∈ F that realizes such a labeling.
+A key result we will use is an application of Theorem 2.6.7 of Van Der Vaart and Wellner (1996),
+which implies that if a function class G has finite Vapnik-Chervonenkis dimension v, then
+\sup_{\mu} N(\epsilon, G, L_2(\mu)) \le \left( \frac{K}{\epsilon} \right)^{C^*},
+where C∗ = 2v − 2.
+4
+Examples
+4.1
+Balance on coarsened function classes
+Consider coarsened exact matching as described in Iacus et al. (2011).
+Let Z0 = {Z0
+i }n0
+i=1 and
+Z1 = {Z1
+j }n1
+j=1 be the control and treatment samples, respectively. In coarsened exact matching
+we create a partition of the sample space and match samples which are found in the same element
+of the partition, and discard samples in subsets without samples from the opposite group. We are
+interested in the quantity
+\Delta = \frac{1}{m_0} \sum_{i \in M_0} w^0_i Z^0_i - \frac{1}{m_1} \sum_{j \in M_1} w^1_j Z^1_j,
+where m_ℓ is the number of matched samples for the ℓth group, M_ℓ is its index set, and \{w^0_i, w^1_j\}_{i \in M_0, j \in M_1}
+are weights.
+In the supplementary material we describe how to express this matching procedure as a function
+f on the variables Z^0_i and Z^1_j. This allows us to express ∆ in terms of f. We further specify the
+function space F for which
+\|\Delta\| \le \gamma_F(Q^0_{n_0}, Q^1_{n_1})
+holds for an appropriate norm. Using the properties of F and provided the bound above, we can
+derive our results of interest:
+\Pr(|\Delta_k| \ge \delta) \le B(\delta, D, C^*),
+for a constant C^* and where \Delta_k is the kth component of ∆. Similarly,
+\Pr(\|\Delta\|_{\ell_p} \ge \delta) \le d\,B(\delta/d^{1/p}, D, C^*)
+and
+\Pr(\|\Delta\|_{\ell_\infty} \ge \delta) \le d\,B(\delta, D, C^*).
+4.2
+Covariate balance on the linear propensity score
+As discussed in Section 3, there has been a lot of work on developing matching results based on
+linear discriminant analysis. That is, we assume that P(Z | T = ℓ) follows N(µℓ, Σ). Under this
+model, the metric for consideration is the logit of the propensity score (see Stuart (2010)). In the
+supplementary material we show the distance |logit(e(Z)) − logit(e(Z′))| can be expressed in terms
+of the linear discriminant analysis hyperplane vector. Indeed, if p is the dimension of the covariates,
+we can create a function space F derived from hyperplanes and with Vapnik-Chervonenkis dimension
+p + 1 such that
+\Delta = \left| \frac{1}{m_0} \sum_{i \in M_0} \mathrm{logit}(e(Z_i)) - \frac{1}{m_1} \sum_{j \in M_1} \mathrm{logit}(e(Z_j)) \right| \le \gamma_F(Q^0_{n_0}, Q^1_{n_1}),
+allowing us, using Theorem 3.3, to determine the bound of interest:
+Pr{∆ > δ} ≤ B(δ, D, 2p).
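Given estimated propensity scores for the matched units, the diagnostic ∆ is a one-line computation. The helper below is our own illustration and assumes the scores e(Z_i) have already been fitted (e.g., by logistic regression).

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def logit_ps_imbalance(e_control, e_treated):
    """Delta: absolute difference in mean logit propensity score between
    the matched control and matched treated units."""
    return abs(np.mean(logit(np.asarray(e_control)))
               - np.mean(logit(np.asarray(e_treated))))
```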
+4.3
+Covariate balance using kernels
+Many authors (Hazlett, 2016; Wong and Chan, 2018; Zhu et al., 2018) have advocated for the use
+of kernel methods for matching and evaluating covariate balance. This corresponds to assuming
+that F in (3.1) represents a Reproducing Kernel Hilbert space. Further details about these function
+spaces can be found in the supplementary material.
+To apply Theorem 3.3 to the kernel setting, we note there exists a version of linear discriminant
+analysis from Section 4.2 that can be extended to the reproducing kernel Hilbert space setting
+(Baudat and Anouar, 2000). Let H be a reproducing kernel Hilbert space and ∥ · ∥_H the norm
+associated with it; then a natural metric to consider for a kernelized matching procedure would be
+\Delta_H = \left\| \frac{1}{m_0} \sum_{i \in M_0} f(Z_i) - \frac{1}{m_1} \sum_{j \in M_1} f(Z_j) \right\|_H,
+which represents a functional generalization of ∆ from Section 4.2, and where f ∈ H is an appropriate
+function chosen by the user. Then \Delta_H \le \gamma_F(Q^0_{n_0}, Q^1_{n_1}), and we can use the previous results with a
+few adjustments. We show in the supplementary material that
+P(∆H > δ) ≤ B(δ, D, C∗),
+where C∗ depends on the smoothness properties of H.
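When f ranges over the unit ball of H, the supremum of such mean differences has a closed form in the kernel: the maximum mean discrepancy, computable from Gram matrices. The sketch below uses a Gaussian kernel on scalar covariates; the kernel choice is ours for illustration, as the paper does not fix one.

```python
import numpy as np

def gaussian_gram(x, y, sigma=1.0):
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2.0 * sigma ** 2))

def mmd_squared(sample0, sample1, sigma=1.0):
    """Squared RKHS distance between the mean embeddings of the two
    empirical distributions (biased V-statistic estimator)."""
    x = np.asarray(sample0, dtype=float)
    y = np.asarray(sample1, dtype=float)
    return (gaussian_gram(x, x, sigma).mean()
            - 2.0 * gaussian_gram(x, y, sigma).mean()
            + gaussian_gram(y, y, sigma).mean())
```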
+5
+Practical implementation
+So far, we have given theoretical results that describe how algorithms under various function classes
+behave under the ideal balance assumption. As noted earlier, the ideal balance definition is strict
+but permits theoretical characterization of various algorithms. The question then naturally arises
+as to how to use the theoretical results from the previous sections in practice.
+Note one can view the metric in equation (3.1) as a multivariate balance metric, which differ-
+entiates it from many other balance metrics in the literature. Zhu et al. (2018) used (3.1), where
+F is a reproducing kernel Hilbert space, as a covariate balance diagnostic. There, they found that
+in certain situations, the diagnostic was more sensitive in finding covariate imbalances relative to
+univariate diagnostics as well as those based on the prognostic score (Hansen, 2008).
+Consider the problem of estimating the average causal effect among the treated. In practice, it
+is unlikely that ideal balance will hold for the treatment and control populations. That is to say,
+γ_F(Q^0, Q^1) ≠ 0, unless treatment is randomized. Therefore, we would not be able to use Theorem 3.3
+in an observational study. However, a slight modification can be made for which the analysis remains
+largely the same.
+Let w ∈ W ⊂ R^{n_0} be a weight vector and define
+Q^0_w = ( 1 / Σ_{i:T_i=0} w_i ) Σ_{i:T_i=0} w_i δ_{X_i}.
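In code, integrating a function against the weighted empirical measure Q^0_w is just a normalized weighted average over the control sample (a minimal sketch; the function name is ours):

```python
import numpy as np

def weighted_expectation(f, X0, w):
    """Integral of f with respect to the weighted empirical measure Q0_w:
    a weighted average of f over the control sample, normalized by sum(w)."""
    w = np.asarray(w, dtype=float)
    return float(np.sum(w * f(np.asarray(X0))) / w.sum())

X0 = np.array([1.0, 2.0, 3.0])
val = weighted_expectation(lambda x: x, X0, w=[1.0, 1.0, 2.0])
# (1*1 + 1*2 + 2*3) / 4 = 2.25
```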
+The majority of methods in causal inference have as a goal to find appropriate weights w for which
+Q^0_w converges to Q* for some distribution Q* that indeed satisfies ideal balance with Q^1, that is,
+for which γ_F(Q*, Q^1) = 0. In order for this modification to be feasible, we just need to modify our
+proof of Theorem 3.3 to include the convergence rate of Q^0_w to Q*, which may change depending
+on the problem. Having done so, we continue in a parallel manner.
+Let f* ∈ F represent a matching procedure with balance diagnostic
+∆ = | ∫ f* dQ^0_w − ∫ f* dQ^1_{n_1} |;
+then, by the definition of γ_F,
+∆ ≤ γ_F(Q^0_w, Q^1_{n_1}).
+Therefore, if we can find weights for which Q^0_w converges to Q* and γ_F(Q*, Q^1) = 0, then we can
+bound the probability that ∆ exceeds some threshold δ.
+There are many methods for finding w ∈ W, the most straightforward being the inverse probability
+of treatment weights,
+w_i = T_i + e(Z_i)(1 − T_i) / (1 − e(Z_i)).
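A minimal sketch of these weights (the propensity values are assumed given; estimating e(Z) is a separate modeling step, and the function name is ours):

```python
import numpy as np

def att_ipw_weights(T, e):
    """w_i = T_i + e(Z_i)(1 - T_i) / (1 - e(Z_i)):
    treated units get weight 1; controls are reweighted by the odds of
    treatment so that they resemble the treated population."""
    T = np.asarray(T, dtype=float)
    e = np.asarray(e, dtype=float)
    return T + e * (1 - T) / (1 - e)

# Hypothetical propensity values for two treated and two control units.
w = att_ipw_weights(T=[1, 1, 0, 0], e=[0.8, 0.6, 0.5, 0.2])
# w == [1.0, 1.0, 1.0, 0.25]
```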
+Even heavily prescribed matching algorithms that are found throughout the causal inference
+literature produce some weights w ∈ W, as described by Abadie and Imbens (2006). In one-to-one
+matching with replacement, let J(i) = {j_1(i), j_2(i), . . .} be the set of indices of units that are
+matched with unit i = 1, 2, . . . , n. If there are no ties, then J(i) = {j(i)}. With ties present, which
+occur frequently especially with exact matching (see coarsened exact matching), J(i) might contain
+multiple matched indices. The matching process allows us to produce weights for every control unit
+by computing
+w_i = Σ_{l:T_l=1} I[i ∈ J(l)] / #J(l)    for all i ∈ {i : T_i = 0},
+where #J(l) denotes the cardinality of J(l).
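This weight computation can be sketched as follows (the indices in the example are hypothetical):

```python
from collections import defaultdict

def matching_weights(matched_sets, control_ids):
    """Control weights from one-to-one matching with replacement.

    matched_sets maps each treated index l to J(l), the set of control
    indices matched to it (several, when there are ties). Each control
    i accumulates I[i in J(l)] / #J(l) over the treated units.
    """
    w = defaultdict(float)
    for l, J in matched_sets.items():
        for i in J:
            w[i] += 1.0 / len(J)
    return {i: w[i] for i in control_ids}

# Hypothetical indices: treated 0 ties with controls 10 and 11,
# treated 1 matches control 10 alone; control 12 is never matched.
wts = matching_weights({0: {10, 11}, 1: {10}}, control_ids=[10, 11, 12])
# wts == {10: 1.5, 11: 0.5, 12: 0.0}
```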
+6 Simulation Studies
+We perform a simulation study to evaluate the distribution of the distances reported in Section 4. We
+also examine their downstream consequences for estimating average treatment effects on the treated.
+There are two data generating mechanisms that we consider. In addition, we vary the sample size
+and the variance of the responses for a total of eight scenarios. We replicate each of these scenarios,
+described below, over 1000 iterations. We report the mean and Monte Carlo standard errors of the
+three distances (∆) examined in Section 4 (Table 1) along with the kernel density estimates for one
+representative scenario (Figure 1). We also evaluate the downstream effects of these ∆ statistics on
+the average treatment effect using the one-to-one matching methods described by Abadie and Imbens
+(2006) and implemented in the Matching package (Sekhon, 2008) (Tables 2 and 3).
+For i = 1, 2, . . . , n, let Z_i1 ∼ N(1, 4), Z_i2 ∼ Bin(1, 0.3), Z_i3 ∼ N(0, 1), and Z_i4 ∼ Bin(1, 0.5),
+and let T_i denote the binary treatment assignment. The conditional means of the outcomes for the
+treated, µ_1(Z_i), and the controls, µ_0(Z_i), are constructed as
+µ_0(Z_i) = 10 − 3Z_i1 − Z_i2 + Z_i3 + 3Z_i4 and
+µ_1(Z_i) = µ_0(Z_i) + 5 + 3Z_i1 − Z_i2 + Z_i3 − 3Z_i4.    (6.1)
+We sample T_i ∼ Bin(1, 0.5). For i = 1, 2, . . . , n, we sample the counterfactual responses
+Y_i(1) ∼ N[µ_1(Z_i), σ^2] and Y_i(0) ∼ N[µ_0(Z_i), σ^2]. The observed outcome is
+Y_i = T_iY_i(1) + (1 − T_i)Y_i(0). We will refer to these conditions with the label “baseline”. For the
+error variance, we set σ^2 ∈ {5, 10}.
+For the scenario labeled “sparse”, we include an additional set of covariates that ultimately do
+not affect the outcome. The outcomes are determined by the potential outcome models in (6.1), yet
+the methods we consider also account for the noise covariates Zi5 ∼ N(−1, 4), Zi6 ∼ Bin(1, 0.7),
+Zi7 ∼ N(0, 1), and Zi8 ∼ Bin(1, 0.5).
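The two data generating mechanisms can be sketched as follows (the function name is ours, and N(1, 4) is parameterized by variance, i.e., standard deviation 2). Because treatment is randomized, averaging µ_1 − µ_0 recovers θ = 5 + 3(1) − 0.3 + 0 − 3(0.5) = 6.2, the value reported in Tables 1-3.

```python
import numpy as np

def simulate(n, sigma2, sparse=False, seed=0):
    """One replicate of the 'baseline' (or 'sparse') scenario of Section 6."""
    rng = np.random.default_rng(seed)
    Z1 = rng.normal(1, 2, n)              # N(1, 4): standard deviation 2
    Z2 = rng.binomial(1, 0.3, n)
    Z3 = rng.normal(0, 1, n)
    Z4 = rng.binomial(1, 0.5, n)
    mu0 = 10 - 3 * Z1 - Z2 + Z3 + 3 * Z4
    mu1 = mu0 + 5 + 3 * Z1 - Z2 + Z3 - 3 * Z4
    T = rng.binomial(1, 0.5, n)           # randomized treatment
    Y0 = rng.normal(mu0, np.sqrt(sigma2))
    Y1 = rng.normal(mu1, np.sqrt(sigma2))
    Y = np.where(T == 1, Y1, Y0)          # observed outcome
    Z = np.column_stack([Z1, Z2, Z3, Z4])
    if sparse:
        # Extra covariates that never enter the outcome model.
        Z = np.column_stack([Z,
                             rng.normal(-1, 2, n), rng.binomial(1, 0.7, n),
                             rng.normal(0, 1, n), rng.binomial(1, 0.5, n)])
    return Z, T, Y, mu1 - mu0

Z, T, Y, tau = simulate(50000, 5)
# mean of tau is close to theta = 6.2
```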
+As mentioned before, we test the three examples described in Section 4 on their ability to produce
+efficient, unbiased estimates of the average treatment effect among the treated. Linear discriminant
+analysis sets f to be the logit transformation of the fitted posterior probability that each unit
+receives treatment. The support vector machine examples use the distance of each point from the
+resulting separating hyperplane, assuming a linear kernel. Coarsened exact matching is performed
+similarly to what is described in Iacus et al. (2011) and is implemented with the cem R package.
+Table 1 shows the results of our simulation experiment. Since balance is already achieved through
+randomization in this simulation, we also report the unmatched, crude estimate of the average
+causal effect for reference. Here the value ∆ is the maximum absolute sample mean difference across
+the unweighted covariates.
+The values ∆ are not necessarily directly comparable in this example. They do, however, represent
+the distributions whose tail probabilities we bound in Theorem 3.3. The simulation serves to
+characterize some of the densities of these statistics so that we might better understand which values
+of δ are acceptable for the different balance methods in Section 4. We see that the values for ∆
+after coarsened exact matching were the most heavily concentrated, followed closely by the values
+n     σ^2  Scenario  θ    A            B            C            D
+1000  5    baseline  6.2  0.11 (0.07)  0.03 (0.02)  0.02 (0.01)  0.09 (0.04)
+1000  5    sparse    6.2  0.15 (0.07)  0.01 (0.01)  0.03 (0.02)  0.13 (0.05)
+1000  10   baseline  6.2  0.12 (0.07)  0.03 (0.02)  0.02 (0.01)  0.09 (0.05)
+1000  10   sparse    6.2  0.15 (0.07)  0.01 (0.01)  0.03 (0.02)  0.13 (0.05)
+2000  5    baseline  6.2  0.08 (0.05)  0.02 (0.01)  0.01 (0.01)  0.06 (0.03)
+2000  5    sparse    6.2  0.11 (0.05)  0.01 (0.01)  0.02 (0.01)  0.09 (0.04)
+2000  10   baseline  6.2  0.08 (0.05)  0.02 (0.01)  0.01 (0.01)  0.06 (0.03)
+2000  10   sparse    6.2  0.11 (0.05)  0.01 (0.01)  0.02 (0.01)  0.09 (0.04)
+Table 1: Average and Monte Carlo standard error of ∆ found in the experiment. In this table,
+Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C to
+linear discriminant analysis, and Method D to support vector machines. Since both A and B create
+a vector-valued ∆, we report the maximum.
+generated by linear discriminant analysis. The balance diagnostics from a support vector machine
+and from an unweighted comparison yielded considerably more dispersed values.
+One point of direct comparison between the different ∆ estimates is the downstream effect of the
+various balancing methods when estimating the average treatment effect. This portion of the
+simulation study shows how the concentration of the distribution of ∆ may have little to do with the
+actual quality of the average treatment effect estimates, the ultimate object of interest for causal
+inference. Although the distribution of ∆ under coarsened exact matching was more concentrated
+than the densities of ∆ under linear discriminant analysis and support vector machines, its estimated
+average treatment effect is also the most biased, and its Monte Carlo standard errors are greater than
+those of the other two balance methods. Linear discriminant analysis also produced a narrow
+concentration of ∆ statistics, yet it yielded the most efficient estimates of the average treatment
+effect, other than the unweighted estimate, which had the smallest Monte Carlo standard errors. This
+result is interesting because the unweighted diagnostics had the most dispersed values of ∆. This
+leads us to believe that the scale of the ∆ statistics must be carefully considered when evaluating
+balance in order to determine which method is most suitable for evaluating treatment effects.
+n     σ^2  Scenario  θ    A            B            C            D
+1000  5    baseline  6.2  6.20 (0.33)  6.24 (0.33)  6.20 (0.42)  6.20 (0.36)
+1000  5    sparse    6.2  6.20 (0.34)  6.29 (1.24)  6.21 (0.45)  6.20 (0.39)
+1000  10   baseline  6.2  6.20 (0.37)  6.22 (0.40)  6.20 (0.47)  6.20 (0.42)
+1000  10   sparse    6.2  6.19 (0.35)  6.31 (1.46)  6.20 (0.46)  6.22 (0.42)
+2000  5    baseline  6.2  6.19 (0.24)  6.21 (0.24)  6.20 (0.29)  6.20 (0.25)
+2000  5    sparse    6.2  6.20 (0.23)  6.34 (0.71)  6.21 (0.29)  6.21 (0.26)
+2000  10   baseline  6.2  6.21 (0.25)  6.21 (0.26)  6.19 (0.32)  6.21 (0.28)
+2000  10   sparse    6.2  6.21 (0.25)  6.38 (0.79)  6.21 (0.31)  6.21 (0.27)
+Table 2: Summary of simulation estimates and Monte Carlo standard errors. The simulation
+scenarios corresponding to “baseline” and “sparse” are described in further detail in Section 6. Here,
+θ refers to the population average treatment effect among the treated. In this table, Method A is the
+unweighted estimate, Method B refers to coarsened exact matching, Method C is linear discriminant
+analysis, and Method D is support vector machines.
+Figure 1: Kernel Densities of the ∆ balancing statistics for the baseline scenario with n = 1000 and
+σ2 = 10. The solid line is the distribution from the unweighted estimates, the dashed line is the
+distribution for coarsened exact matching, the dotted line is the distribution for the linear propensity
+score, and the dotted-dashed line for the support vector machine examples.
+n     σ^2  Scenario  θ    A      B      C      D
+1000  5    baseline  6.2  0.952  0.937  0.941  0.929
+1000  5    sparse    6.2  0.944  0.955  0.934  0.917
+1000  10   baseline  6.2  0.941  0.918  0.935  0.912
+1000  10   sparse    6.2  0.955  0.950  0.951  0.931
+2000  5    baseline  6.2  0.931  0.945  0.937  0.923
+2000  5    sparse    6.2  0.956  0.945  0.939  0.918
+2000  10   baseline  6.2  0.959  0.936  0.926  0.928
+2000  10   sparse    6.2  0.953  0.946  0.948  0.935
+Table 3: Summary of coverage probabilities from the simulation experiment. The simulation
+scenarios corresponding to “baseline” and “sparse” are described in further detail in Section 6. Here,
+θ refers to the population average treatment effect among the treated. In this table, Method A is the
+unweighted estimate, Method B refers to coarsened exact matching, Method C to linear discriminant
+analysis, and Method D to support vector machines.
+Acknowledgments
+The authors would like to acknowledge funding support from the following sources: the National
+Institutes of Health, the National Science Foundation, the Veterans Administration, and the Grohne-
+Stepp Endowment from the University of Colorado Cancer Center.
+Appendix
+Proof of Theorem 3.3
+We will use P and Q instead of Q^0 and Q^1 to ease the symbolic burden on the reader.
+Proof. By the definition of γ:
+γ(P_{n_0}, Q_{n_1}) = sup_{f∈F} | ∫ f dP_{n_0} − ∫ f dQ_{n_1} |
+= sup_{f∈F} | ∫ f dP_{n_0} ± ∫ f dP ± ∫ f dQ − ∫ f dQ_{n_1} |
+≤ sup_{f∈F} | ∫ f dP_{n_0} − ∫ f dP − ∫ f dQ_{n_1} + ∫ f dQ | + sup_{f∈F} | ∫ f dP − ∫ f dQ |
+= sup_{f∈F} | ∫ f dP_{n_0} − ∫ f dP − ∫ f dQ_{n_1} + ∫ f dQ |,
+since γ(P, Q) = 0. Using elementary probability arguments, we have
+Pr{γ(P_{n_0}, Q_{n_1}) > δ}
+= Pr{ sup_{f∈F} | ∫ f dP_{n_0} − ∫ f dP − ∫ f dQ_{n_1} + ∫ f dQ | > δ }
+= Pr{ sup_{f∈F} | (1/√n_0) G^P_{n_0}(f) − (1/√n_1) G^Q_{n_1}(f) | > δ }
+≤ Pr{ sup_{f∈F} |G^P_{n_0}(f)| > √n_0 δ/2 } + Pr{ sup_{f∈F} |G^Q_{n_1}(f)| > √n_1 δ/2 },
+where G^P_{n_0}(f) and G^Q_{n_1}(f) represent the F-indexed empirical processes of P and Q, respectively.
+Applying Theorem 2.14.9 in Van Der Vaart and Wellner (1996), we can bound each of the terms
+as follows:
+Pr{ sup_{f∈F} |G^P_{n_0}(f)| > √n_0 δ/2 } < ( D√n_0 δ / (2√C) )^C exp(−n_0 δ^2/2),
+Pr{ sup_{f∈F} |G^Q_{n_1}(f)| > √n_1 δ/2 } < ( D√n_1 δ / (2√C) )^C exp(−n_1 δ^2/2),
+where D is a constant depending only on K. Plugging these two bounds into (6.2) concludes the
+proof.
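As a small numerical illustration of the concentration the theorem describes, the sketch below uses a toy finite function class (our choice, not from the paper; a finite class trivially has bounded complexity) and draws both samples from a common P = Q, so that ideal balance holds:

```python
import numpy as np

def gamma_hat(x, y, funcs):
    """Empirical gamma_F for a finite class of real-valued functions."""
    return max(abs(f(x).mean() - f(y).mean()) for f in funcs)

rng = np.random.default_rng(0)
F = [np.sin, np.cos, np.tanh, lambda z: z, lambda z: z ** 2]

def avg_gamma(n, reps=100):
    # Both samples are drawn from the same distribution (ideal balance).
    return np.mean([gamma_hat(rng.normal(size=n), rng.normal(size=n), F)
                    for _ in range(reps)])

small_n, large_n = avg_gamma(100), avg_gamma(5000)
```

Consistent with the exponential tail bound, the statistic concentrates near zero as n₀ and n₁ grow, so `small_n > large_n`.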
+Proof of Lemma 3.6
+Proof. Define γ_i = γ_{F_i}(P_i, Q_i). Then:
+Pr( Σ_i γ_i > δ ) = 1 − Pr( Σ_i γ_i < δ )
+≤ 1 − Pr( γ_i < δ/d for all i )
+= Pr( there exists i such that γ_i > δ/d )
+≤ Σ_i Pr( γ_i > δ/d )
+≤ Σ_i B(δ/d, D_i, C_i),
+where we have used the union bound in the second inequality.
+Proof of Lemma 3.7
+Proof. Assume γ_{F_i}(µ, ν) = 0 for all i. Then
+γ_{F_π}(µ, ν) = sup_{f^π∈F_π} | ∫ f^π dµ − ∫ f^π dν |
+= max_ℓ sup_{f∈F} | ∫ π_ℓ ∘ f dµ − ∫ π_ℓ ∘ f dν |
+= max_ℓ sup_{f∈F} | ∫ f_ℓ dµ − ∫ f_ℓ dν |
+= max_ℓ sup_{f_ℓ∈F_ℓ} | ∫ f_ℓ dµ − ∫ f_ℓ dν |
+= max_ℓ γ_{F_ℓ}(µ, ν) = 0.
+Conversely, assuming γ_{F_π}(µ, ν) = 0 yields
+γ_{F_ℓ}(µ, ν) = sup_{f_ℓ∈F_ℓ} | ∫ f_ℓ dµ − ∫ f_ℓ dν |
+= sup_{f∈F} | ∫ π_ℓ ∘ f dµ − ∫ π_ℓ ∘ f dν |
+≤ max_ℓ sup_{f∈F} | ∫ π_ℓ ∘ f dµ − ∫ π_ℓ ∘ f dν |
+= γ_{F_π}(µ, ν) = 0.
+This proves the first two equivalences. The third one is a byproduct of the proof.
+Proof of Corollary 3.8
+Proof. To avoid cumbersome notation, let
+v = (1/n_0) Σ_{j=1}^{n_0} f*(X^0_j) − (1/n_1) Σ_{j=1}^{n_1} f*(X^1_j),
+and note v_ℓ = (1/n_0) Σ_{j=1}^{n_0} f*_ℓ(X^0_j) − (1/n_1) Σ_{j=1}^{n_1} f*_ℓ(X^1_j). Then:
+Pr( ∥v∥_{ℓ^p} > δ ) = Pr( ∥v∥^p_{ℓ^p} > δ^p ) = Pr( Σ_ℓ |v_ℓ|^p > δ^p )
+≤ Pr( Σ_ℓ γ_{F_ℓ}(Q^0_{n_0}, Q^1_{n_1})^p > δ^p )
+≤ Σ_ℓ Pr( γ_{F_ℓ}(Q^0_{n_0}, Q^1_{n_1})^p > δ^p/d )
+= Σ_ℓ Pr( γ_{F_ℓ}(Q^0_{n_0}, Q^1_{n_1}) > δ/d^{1/p} )
+≤ Σ_ℓ B(δ/d^{1/p}, D*, C*) = d B(δ/d^{1/p}, D*, C*),
+where the second and third inequalities follow from a slight variation of Lemma 3.6 and an
+application of Lemma 3.7. For the ℓ^∞ case we have:
+Pr( ∥v∥_{ℓ^∞} > δ ) ≤ Pr( max_ℓ |γ_ℓ| > δ ) ≤ Σ_ℓ B(δ, D*, C*),
+concluding the proof.
+Balance for coarsening functions
+We will show that the coarsened exact matching procedure belongs to a class of functions with
+tractable Vapnik-Chervonenkis dimension. Consider the set S of partitions with a fixed number of
+elements R. For a given partition S ∈ S, with S = {s_1, . . . , s_R}, define f^{kα}_S to be
+f^{kα}_S(x) = Σ_{i=1}^{R} k_i α_i x χ_{s_i}(x),
+where k_i ≤ k for a constant k, χ_{s_i} is the indicator function of s_i, and α := (α_1, . . . , α_R) is a
+binary vector, that is, α_i ∈ {0, 1} for each i. In words, if x is found in s_i, then f returns a scaled
+version of x if α_i is 1 and zero otherwise.
+Now let F := {f^{kα}_S}_{S∈S, α∈A, k≤κ}, where A is the set of all binary vectors of size R and κ ∈ R.
+Hence, the coarsened exact matching procedure belongs to this class of functions, since in that case
+α_i indicates whether there are at least two members of different groups in stratum s_i. For any
+sample point x, the weights are usually chosen in the following manner: if x is a treated unit,
+w^1_i = 1; otherwise, w^0_i = (m^s_1/m_1)/(m^s_0/m_0), where s is the stratum x belongs to. Letting
+k_i = w^ℓ_i n_ℓ/m_ℓ appropriately weighs the matched samples. We just need to add the mild
+assumption that the ratio of sample size to matched size per stratum s does not grow faster than √κ,
+that is, n_ℓ/m^s_ℓ ≤ √κ for all s ∈ S, because in that case w^0_i ≤ m_0/m^s_0 ≤ n_0/m^s_0 ≤ √κ and
+n_ℓ/m_ℓ ≤ √κ m^s_ℓ/m_ℓ ≤ √κ, so k_i ≤ κ. Finally, notice that any similar function with a smaller
+partition size can be expressed by a function in F, so we can consider variable partition size as long
+as it does not exceed a reasonable bound R.
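As a hedged sketch of the stratum-weight construction just described (the stratum labels are hypothetical, and the exact conventions of the cem package may differ):

```python
from collections import Counter

def cem_weights(strata, T):
    """Weights from coarsened exact matching.

    Within each matched stratum s (one containing both groups), a
    control gets w = (m1_s / m1) / (m0_s / m0); treated units get 1 and
    units in unmatched strata get 0.
    """
    t1 = Counter(s for s, t in zip(strata, T) if t == 1)
    t0 = Counter(s for s, t in zip(strata, T) if t == 0)
    matched = set(t1) & set(t0)
    m1 = sum(t1[s] for s in matched)
    m0 = sum(t0[s] for s in matched)
    w = []
    for s, t in zip(strata, T):
        if s not in matched:
            w.append(0.0)
        elif t == 1:
            w.append(1.0)
        else:
            w.append((t1[s] / m1) / (t0[s] / m0))
    return w

# Stratum 'a': 1 treated, 2 controls; 'b': 1 treated, 1 control; 'c': unmatched.
w = cem_weights(['a', 'a', 'a', 'b', 'b', 'c'], [1, 0, 0, 1, 0, 0])
# w is approximately [1, 0.75, 0.75, 1, 1.5, 0]
```

Note the weighted control counts sum back to m₀ (0.75 + 0.75 + 1.5 = 3), which is what makes the weighted control sample comparable to the treated one.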
+For any set of points of size R there is a partition S that places each point in a different element,
+and therefore an α that can assign each point arbitrarily to either 0 or 1; so F shatters such a set.
+However, if we add an extra point, then since the number of partition elements is constrained, it
+must share a partition element with a previous point, and hence also its assignment under f^{kα}_S.
+So the Vapnik-Chervonenkis dimension of F is R. Finally, let g(Z^ℓ) = Q^ℓ_{n_ℓ}, where Q^ℓ_{n_ℓ} is the
+empirical distribution of the sample Z^ℓ for group ℓ. Let k* be chosen as above and let (S*, α*) be
+the particular partition and binary vector used for coarsened exact matching. Then, for the ℓth
+component we get:
+| (1/m_0) Σ_{i∈M_0} w^0_i Z^0_{i,ℓ} − (1/m_1) Σ_{j∈M_1} w^1_j Z^1_{j,ℓ} |
+= | (1/n_0) Σ_{i=1}^{n_0} f^{k*α*}_{S*,ℓ}(Z^0_i) − (1/n_1) Σ_{j=1}^{n_1} f^{k*α*}_{S*,ℓ}(Z^1_j) |
+≤ sup_{f_ℓ∈F*} | (1/n_0) Σ_{i=1}^{n_0} f_ℓ(Z^0_i) − (1/n_1) Σ_{j=1}^{n_1} f_ℓ(Z^1_j) |
+= γ_{F*}(Q^0_{n_0}, Q^1_{n_1}) = γ_{F*}(g(Z^0), g(Z^1)).
+Thus, the discrepancy among the matched samples per dimension is bounded by the γ_{F*} distance
+of the unmatched samples. Finally, the function h(x) := κx is an envelope function of F and has
+norm ∥h∥_{L^2(µ)} < ∞ as long as we assume a compact domain, which is reasonable in most
+coarsened exact matching applications. Then, by Theorem 2.6.7 of Van Der Vaart and Wellner (1996):
+sup_µ N(ϵ, F, L^2(µ)) ≤ (K/ϵ)^{C*},
+for some constant K and where C* = 2(R − 1).
+This leads us to our final result: assume ideal balance on the population probabilities holds for
+γ_{F_π}; then, for the ℓth component we have:
+Pr( | (1/m_0) Σ_{i∈M_0} w^0_i Z^0_{i,ℓ} − (1/m_1) Σ_{j∈M_1} w^1_j Z^1_{j,ℓ} | > δ ) ≤ B(δ, D, C*).
+If we are interested in the ℓ^p norm of the full vector instead, then, by Corollary 3.8:
+Pr( ∥ (1/m_0) Σ_{i∈M_0} w^0_i Z^0_i − (1/m_1) Σ_{j∈M_1} w^1_j Z^1_j ∥_{ℓ^p} > δ ) ≤ d B(δ/d^{1/p}, D, C*),
+for finite p ≥ 1, while
+Pr( ∥ (1/m_0) Σ_{i∈M_0} w^0_i Z^0_i − (1/m_1) Σ_{j∈M_1} w^1_j Z^1_j ∥_{ℓ^∞} > δ ) ≤ d B(δ, D, C*).
+Balance using propensity scores
+Recall e(Z) = P(T = 1 | Z), and that we are assuming Z | T = ℓ ∼ N(µ_ℓ, Σ). Let p_ℓ be the
+probability density function of N(µ_ℓ, Σ), that is, the Gaussian density; then by the density version
+of Bayes' theorem we have
+p(T = 1 | Z = z) = p_1 P(T = 1) / ( p_1 P(T = 1) + p_0 P(T = 0) ).
+Therefore, we can express the logit of e(Z) as
+logit(e(Z)) = log( e(Z) / (1 − e(Z)) ) = log( p_1 P(T = 1) / (p_0 P(T = 0)) ).
+Now define L_k := logit(e(Z_k)); the matching procedure is then based on the difference |L_i − L_j|.
+Given the above computation, and after a few straightforward steps, we get
+|L_i − L_j| = | (µ_1 − µ_0)^T Σ^{−1} (Z_i − Z_j) | = | f*(Z_i) − f*(Z_j) |,
+where f*(x) = w^T x for w ∈ R^p. Notice the vector w is the same as the one used for linear
+discriminant analysis, so, adding an offset parameter, it will be useful to think of f* as a hyperplane.
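The identity |L_i − L_j| = |w^T(Z_i − Z_j)| with w = Σ^{−1}(µ_1 − µ_0) can be checked numerically under the Gaussian assumption; the means, covariance, and equal class prior below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3
mu0, mu1 = np.zeros(p), np.array([1.0, -0.5, 2.0])   # illustrative means
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)                      # a valid shared covariance

def logit_e(z, pi1=0.5):
    """logit of e(Z) under Gaussian class conditionals with equal covariance."""
    def quad(mu):
        d = z - mu
        return -0.5 * d @ np.linalg.solve(Sigma, d)
    # Shared normalizing constants cancel in the log density ratio.
    return quad(mu1) - quad(mu0) + np.log(pi1 / (1 - pi1))

w = np.linalg.solve(Sigma, mu1 - mu0)   # the LDA direction Sigma^{-1}(mu1 - mu0)
zi, zj = rng.normal(size=p), rng.normal(size=p)
lhs = abs(logit_e(zi) - logit_e(zj))
rhs = abs(w @ (zi - zj))
```

Here `lhs` and `rhs` agree to floating-point precision: the quadratic and offset terms cancel in the difference, leaving only the linear part.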
+Let M^j_0 be the control units assigned to treatment unit j. We make the assumption that there is a
+fixed number of assigned controls for each treated unit, so that m_0 = |M^j_0| m_1. Then
+∆ := | (1/m_1) Σ_{j∈M_1} logit(e_j) − (1/m_0) Σ_{i∈M_0} logit(e_i) |
+= | (1/m_1) Σ_{j∈M_1} L_j − Σ_{j∈M_1} (1/m_0) Σ_{i∈M^j_0} L_i |
+= | Σ_{j∈M_1} ( (1/m_1) L_j − (1/m_0) Σ_{i∈M^j_0} L_i ) |
+= | Σ_{j∈M_1} ( (1/m_1) Σ_{i∈M^j_0} L_j/|M^j_0| − (1/m_0) Σ_{i∈M^j_0} L_i ) |
+= | Σ_{j∈M_1} Σ_{i∈M^j_0} ( L_j/(m_1 |M^j_0|) − L_i/m_0 ) |
+= | Σ_{j∈M_1} Σ_{i∈M^j_0} (1/m_0)(L_j − L_i) |
+= | Σ_{j∈M_1} Σ_{i∈M^j_0} (1/m_0)( f*(Z_j) − f*(Z_i) ) |
+= | (1/m_1) Σ_{j∈M_1} f*(Z_j) − (1/m_0) Σ_{i∈M_0} f*(Z_i) |.
+That is, we can express the difference of means of the logits in terms of the difference of means of
+the discriminant functions. Let p be the dimension of the covariates, and let F be the collection of
+p-dimensional hyperplanes; notice f* ∈ F. The Vapnik-Chervonenkis dimension of F is known to
+be p + 1 (Mohri et al., 2018). We would like to bound ∆ in terms of γ, but we first need some
+adjustments to f*.
+The matching procedure determines a set ZM = {Zk | k ∈ M} of matched samples, where
+M = M0 ∪ M1. By the Gaussian assumption the Zs are sampled from a Gaussian mixture so the
+probability of two sample points being the same is zero. Hence there is an ϵ > 0 such that for all
+k ∈ M, Z ∩ B_ϵ(Z_k) = {Z_k}; that is, each ϵ-ball centered around a matched sample does not
+contain any other sample point (here Z is the sample set). Let S_ϵ = ∪_k B_ϵ(Z_k). Note S_ϵ is a
+measurable set. Let β_{S_ϵ}(x) := x χ_{S_ϵ}(x); this function maps points to zero if unmatched and to
+themselves if matched. Furthermore, let β_ℓ(x) := (m_ℓ/n_ℓ) χ_{M_ℓ}(x) + χ_{M_ℓ^C}(x), for ℓ ∈ {0, 1}.
+Each β_ℓ scales elements in M_ℓ by the factor m_ℓ/n_ℓ and leaves the rest untouched.
+Notice that f*_M := f* ∘ β_1 ∘ β_0 ∘ β_{S_ϵ} sends Z_k to (m_ℓ/n_ℓ) w^T Z_k if k ∈ M_ℓ and to 0
+otherwise. Then we can express ∆ as
+∆ = | (1/m_1) Σ_{j∈M_1} f*(Z_j) − (1/m_0) Σ_{i∈M_0} f*(Z_i) |
+= | (1/n_1) Σ_{j=1}^{n_1} f*_M(Z_j) − (1/n_0) Σ_{i=1}^{n_0} f*_M(Z_i) |.
+Now, consider the set F_M := {f ∘ β_1 ∘ β_0 ∘ β_S | f ∈ F, S ∈ Σ}, where Σ is the set of measurable
+sets according to the distribution of the Zs. The Vapnik-Chervonenkis dimension of F_M is the same
+as that of F, that is, p + 1. To see this, notice that the standard derivation for the hyperplane case
+involves shattering the standard basis B in R^p. With probability one, no sample point will equal a
+standard basis vector, so there is an ϵ′ > 0 for which we can create a set s = ∪_{x∈B} B_{ϵ′}(x) such
+that s ∈ Σ and no sample point is in s. Considering the functions {f_ν} in F used to shatter B and
+using s, we can use the functions {f_ν ∘ β_1 ∘ β_0 ∘ β_s} in F_M to also shatter B. So the Vapnik-
+Chervonenkis dimension is at least p + 1. Since the functions β_1, β_0, and β_S are either zero or a
+scaled identity, they add no complexity, and the dimension is no larger than p + 1; so it is indeed
+p + 1. For the envelope function, we can choose h(x) = ⟨w_e, x⟩. The norm of w_e must be large
+enough to maintain a Vapnik-Chervonenkis dimension of p + 1. Since the vectors used to ensure such
+a dimension have norm p + 1, the norm of w_e must be at least p + 1, so we can choose any large
+constant C > p + 1. Since we are interested in vectors of the form w = Σ^{−1}∆µ, we have
+∥w∥ ≤ ∥Σ^{−1}∥_F ∥∆µ∥_2, so the user has to choose constants that bound each of these norms. We
+must also assume the covariates themselves are bounded; this ensures a finite norm for h.
+Finally, we have
+∆ = | (1/n_1) Σ_{j=1}^{n_1} f*_M(Z_j) − (1/n_0) Σ_{i=1}^{n_0} f*_M(Z_i) |
+≤ sup_{f∈F_M} | (1/n_1) Σ_{j=1}^{n_1} f(Z_j) − (1/n_0) Σ_{i=1}^{n_0} f(Z_i) |
+= γ_{F_M}(Q^0_{n_0}, Q^1_{n_1}).
+Assuming ideal balance on the population probabilities, and applying Theorem 2.6.7 of Van Der Vaart
+and Wellner (1996) in conjunction with Theorem 3.3, yields
+Pr{∆ > δ} ≤ B(δ, D, 2p).
+Covering number bound for Reproducing Kernel Hilbert Spaces
+We refer the reader to Wahba (1990); Berlinet and Thomas-Agnan (2011); Steinwart and Christmann
+(2008) for nice overviews on reproducing kernel Hilbert spaces. Roughly speaking, a mapping k :
+X × X → R is said to be the reproducing kernel associated to the reproducing kernel Hilbert space
+H if it satisfies the following properties: (a) k(·, x) ∈ H for any x ∈ X; (b) f(x) = ⟨f, k(·, x)⟩H for
+all f ∈ H and x ∈ X. Property (b) is commonly referred to as the reproducing property.
+To apply Theorem 3.3 to the reproducing kernel case, we will need to directly bound the covering
+number based on arguments different from Vapnik-Chervonenkis theory. Define the space
+H^m_q(R^p) = { f ∈ L^q(R^p) | D^j f ∈ L^q(R^p) ∀ j ∈ {1, . . . , m}; ∥f∥_q < ∞ },
+where
+∥f∥_q = Σ_{0≤|α|≤m} ∥D^α f∥_{L^q}
+and D^α denotes partial derivatives in the sense of distributions. Then, as a consequence of
+Theorem 1 of Nickl and Pötscher (2007), if m − q/p > 0, then
+N(ϵ, H, ∥·∥_q) ≤ b_1 ϵ^{−q},
+while if m − q/p < 0,
+N(ϵ, H, ∥·∥_q) ≤ b_2 ϵ^{−p/m}.
+Based on this result, Theorem 3.3 can then be applied to prove a convergence rate under ideal
+balance. Note that this does not cover the Gaussian kernel case, because the Gaussian kernel is
+infinitely differentiable, so the space H^m_q(R^p) does not apply. For the reader interested in the
+Gaussian case, we refer to the recent paper by Steinwart and Fischer (2020).
+References
+Abadie, A. and G. W. Imbens (2006). Large sample properties of matching estimators for average
+treatment effects. Econometrica 74(1), 235–267.
+Abadie, A. and G. W. Imbens (2011). Bias-corrected matching estimators for average treatment
+effects. Journal of Business & Economic Statistics 29(1), 1–11.
+Abadie, A. and G. W. Imbens (2016). Matching on the estimated propensity score. Economet-
+rica 84(2), 781–807.
+Baudat, G. and F. Anouar (2000). Generalized discriminant analysis using a kernel approach. Neural
+computation 12(10), 2385–2404.
+Berlinet, A. and C. Thomas-Agnan (2011). Reproducing kernel Hilbert spaces in probability and
+statistics. Springer Science & Business Media.
+Chan, K. C. G., S. C. P. Yam, and Z. Zhang (2016). Globally efficient non-parametric inference
+of average treatment effects by empirical balancing calibration weighting. Journal of the Royal
+Statistical Society: Series B (Statistical Methodology) 78(3), 673–700.
+Chervonenkis, A. and V. Vapnik (1971). Uniform convergence of the frequencies of occurrence of
+events to their probabilities. Teoriia Veroiatnostei I Ee Primeneniia 16, 264–279.
+Hainmueller, J. (2012). Entropy balancing for causal effects: A multivariate reweighting method to
+produce balanced samples in observational studies. Political Analysis 20(1), 25–46.
+Hansen, B. B. (2008). The prognostic analogue of the propensity score. Biometrika 95(2), 481–488.
+Hazlett, C. (2016). Kernel balancing: A flexible non-parametric weighting procedure for estimating
+causal effects.
+Ho, D. E., K. Imai, G. King, and E. A. Stuart (2007). Matching as nonparametric preprocessing for
+reducing model dependence in parametric causal inference. Political analysis 15(3), 199–236.
+Holland, P. W. (1986). Statistics and causal inference. Journal of the American statistical Associa-
+tion 81(396), 945–960.
+Iacus, S. M., G. King, and G. Porro (2011). Multivariate matching methods that are monotonic
+imbalance bounding. Journal of the American Statistical Association 106(493), 345–361.
+Imai, K. and M. Ratkovic (2014).
+Covariate balancing propensity score.
+Journal of the Royal
+Statistical Society: Series B (Statistical Methodology) 76(1), 243–263.
+Imbens, G. W. and D. B. Rubin (2015). Causal inference in statistics, social, and biomedical sciences.
+Cambridge University Press.
+Kallus, N. (2020). Generalized optimal matching methods for causal inference. Journal of Machine
+Learning Research 21(62), 1–54.
+Kosorok, M. R. (2007). Introduction to empirical processes and semiparametric inference. Springer
+Science & Business Media.
+Mohri, M., A. Rostamizadeh, and A. Talwalkar (2018). Foundations of machine learning. MIT
+press.
+Neyman, J. (1923). Sur les applications de la théorie des probabilités aux expériences agricoles:
+Essai des principes. Roczniki Nauk Rolniczych 10, 1–51.
+Nickl, R. and B. M. Pötscher (2007). Bracketing metric entropy rates and empirical central limit
+theorems for function classes of Besov- and Sobolev-type. Journal of Theoretical Probability 20(2),
+177–199.
+Rosenbaum, P. R. and D. B. Rubin (1983). The central role of the propensity score in observational
+studies for causal effects. Biometrika 70(1), 41–55.
+Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized
+studies. Journal of educational Psychology 66(5), 688.
+Rubin, D. B. (1976). Multivariate matching methods that are equal percent bias reducing, i: Some
+examples. Biometrics, 109–120.
+Rubin, D. B., E. A. Stuart, et al. (2006). Affinely invariant matching methods with discriminant
+mixtures of proportional ellipsoidally symmetric distributions. The Annals of Statistics 34(4),
+1814–1826.
+Rubin, D. B. and N. Thomas (1992). Affinely invariant matching methods with ellipsoidal distribu-
+tions. The Annals of Statistics, 1079–1093.
+Salimi, B. and D. Suciu (2016). ZaliQL: A SQL-based framework for drawing causal inference from
+big data. arXiv preprint arXiv:1609.03540.
+Sekhon, J. S. (2008). Multivariate and propensity score matching software with automated balance
+optimization: The Matching package for R. Journal of Statistical Software, Forthcoming.
+Steinwart, I. and A. Christmann (2008). Support vector machines. Springer Science & Business
+Media.
+Steinwart, I. and S. Fischer (2020). A closer look at covering number bounds for gaussian kernels.
+Journal of Complexity, 101513.
+Stuart, E. A. (2010). Matching methods for causal inference: A review and a look forward. Statistical
+science: a review journal of the Institute of Mathematical Statistics 25(1), 1.
+Van Der Vaart, A. W. and J. A. Wellner (1996). Weak convergence. In Weak convergence and
+empirical processes, pp. 16–28. Springer.
+Wahba, G. (1990). Spline Models for Observational Data. Society for Industrial and Applied Math-
+ematics.
+Wang, T., M. Morucci, M. U. Awan, Y. Liu, S. Roy, C. Rudin, and A. Volfovsky (2017). Flame: A
+fast large-scale almost matching exactly approach to causal inference.
+Wang, Y. and J. R. Zubizarreta (2019). Minimal dispersion approximately balancing weights: asymp-
+totic properties and practical considerations. Biometrika.
+Wong, R. K. and K. C. G. Chan (2018). Kernel-based covariate functional balancing for observational
+studies. Biometrika 105(1), 199–213.
+Zhu, Y., J. S. Savage, and D. Ghosh (2018). A kernel-based metric for balance assessment. Journal
+of causal inference 6(2).
+Zolotarev, V. M. (1984).
+Probability metrics.
+Theory of Probability & Its Applications 28(2),
+278–302.
+Zubizarreta, J. R. (2015). Stable weights that balance covariates for estimation with incomplete
+outcome data. Journal of the American Statistical Association 110(511), 910–922.
+22
+
diff --git a/b9AyT4oBgHgl3EQf-PqC/content/tmp_files/load_file.txt b/b9AyT4oBgHgl3EQf-PqC/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ce207cee0722e6b2ec9ff6d8ace03b44b8f093af
--- /dev/null
+++ b/b9AyT4oBgHgl3EQf-PqC/content/tmp_files/load_file.txt
@@ -0,0 +1,1108 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf,len=1107
An empirical process framework for covariate balance in causal inference

Efrén Cruz Cortés
Michigan Institute for Data Science
Center for the Study of Complex Systems
University of Michigan
encc@umich.edu

Kevin Josey
Department of Biostatistics
Harvard T. H. Chan School of Public Health
kjosey@hsph.harvard.edu

Fan Yang
Department of Biostatistics and Informatics
Colorado School of Public Health
fan.3.yang@cuanschutz.edu

Debashis Ghosh
Department of Biostatistics and Informatics
Colorado School of Public Health
debashis.ghosh@cuanschutz.edu

Abstract

We propose a new perspective for the evaluation of matching procedures by considering the complexity of the function class they belong to. Under this perspective we provide theoretical guarantees on post-matching covariate balance through a finite-sample concentration inequality. We apply this framework to coarsened exact matching as well as matching using the propensity score, and suggest how to apply it to other algorithms. Simulation studies are used to evaluate the procedures.

Keywords: causal effects, empirical distribution function, entropy metric, superpopulation, tail inequality, Vapnik-Chervonenkis dimension.
1 Introduction

Causal inference is a central goal for outcomes and policy research, particularly in the medical field. Among the many topics in this broad field of study are methods for evaluating treatment effects with non-randomized data. There is an abundance of observational data in nearly every discipline of science. However, bias induced by confounding is inherent in observational studies. In this context, the researcher must account for every potential confounder in some way before they can establish causality. While randomization remains the gold standard for inference, as there is no confounding by definition, randomizing individuals into treatment groups is often cost-prohibitive and sometimes unethical for certain study designs.

Under the potential outcomes framework (Neyman, 1923; Rubin, 1974), Rosenbaum and Rubin (1983) described how the propensity score plays a key role in causal effect estimation and inference with observational data. The propensity score is defined as the probability of receiving a treatment given a set of measured covariates. Under the strong ignorability assumption, the propensity score removes bias attributable to confounding due to its property as a balancing score (Rosenbaum and Rubin, 1983). With this result in mind, numerous methods for causal effect estimation were subsequently developed around the propensity score, with covariate balance serving as the primary objective (e.g., Imai and Ratkovic (2014); Zubizarreta (2015); Chan et al. (2016)).

arXiv:2301.00889v1 [math.ST] 2 Jan 2023
However, the results presented by Rosenbaum and Rubin (1983) about the propensity score are derived in an asymptotic setting. This means that estimates of the propensity score may not adequately balance the covariate distribution in finite samples. Therefore, many methods proceed by iterating between fitting a model for the propensity score and evaluating balance diagnostics on the propensity-score-adjusted covariates before estimating the treatment effect of interest. Some methods for evaluating balance diagnostics have been proposed by Ho et al. (2007) and Sekhon (2008).

The propensity score literature has mostly diverged into two overlapping yet distinct domains: one that uses the propensity score to derive balancing weights (Hainmueller, 2012; Imai and Ratkovic, 2014; Chan et al., 2016), and another that uses a balancing score, such as the propensity score, to construct a matched cohort. Recently, a multivariate matching approach using coarsened values of the observed covariates was developed by Iacus et al. (2011). They refer to their algorithm as coarsened exact matching. One of the primary aims of their method was to eliminate the iterative step of re-matching participants until an acceptable amount of balance is achieved.
Coarsened exact matching is quite simple in nature and proceeds using the following high-level heuristic:

1. For each confounding variable, coarsen it into a certain number of categories;

2. Create strata based on the possible combinations of the coarsened values;

3. Compute a causal effect by comparing the outcomes of the treatment groups within the strata and adjusting for the stratum effect appropriately.
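The heuristic above can be sketched in a few lines. The following is an illustrative implementation of steps 1 and 2 only, under the simplifying assumption of equal-width binning (the algorithm of Iacus et al. (2011) permits arbitrary user-chosen coarsenings); the function name and interface are hypothetical.

```python
from collections import defaultdict

def coarsened_exact_match(Z, T, n_bins=4):
    """Illustrative sketch of steps 1-2 of coarsened exact matching.

    Z: list of covariate vectors; T: list of 0/1 treatment labels.
    Coarsens each covariate into n_bins equal-width bins, then groups
    units into strata by their vector of bin labels.  Only strata that
    contain both treated and control units are retained.
    """
    p = len(Z[0])
    # Step 1: equal-width coarsening of each covariate (a simplifying
    # choice made here for illustration).
    bins = []
    for j in range(p):
        col = [z[j] for z in Z]
        lo, hi = min(col), max(col)
        width = (hi - lo) / n_bins or 1.0  # guard against constant columns
        bins.append((lo, width))

    def coarsen(z):
        return tuple(min(int((z[j] - bins[j][0]) / bins[j][1]), n_bins - 1)
                     for j in range(p))

    # Step 2: exact matching on the coarsened vectors.
    strata = defaultdict(list)
    for i, z in enumerate(Z):
        strata[coarsen(z)].append(i)
    # Keep only strata with at least one treated and one control unit.
    return {s: idx for s, idx in strata.items()
            if len({T[i] for i in idx}) == 2}
```

Step 3 then compares treated and control outcomes within each retained stratum and combines the stratum-level contrasts with appropriate weights.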
The theoretical justification provided by Iacus et al. (2011) for coarsened exact matching is a concept they term monotonic imbalance. They show that bounding the distance between confounders to be small leads to matching procedures that are more flexible than procedures based on the equal percent bias reduction theory developed by Rubin and collaborators (Rubin, 1976; Rubin and Thomas, 1992; Rubin et al., 2006). One of the main advantages of coarsened exact matching is that it is amenable to large-scale database querying approaches to performing causal inference; see Salimi and Suciu (2016) as well as Wang et al. (2017).
However, fewer technical results exist for matching estimators than for other approaches, such as inverse probability weighting estimators. Abadie and Imbens (2006) studied the large-sample asymptotics of matching estimators and found that, in general, matching-based estimators of the average causal effect do not have the usual n^{1/2} convergence. The intuition is that the matching algorithm introduces a bias into causal effect estimation that does not vanish asymptotically; this bias term also increases with the number of confounders. Bias-corrected estimators have been proposed by Abadie and Imbens (2011). Abadie and Imbens (2016) performed a theoretical study of the asymptotic behavior of average causal effect estimators that match on the estimated propensity score.
Conceptually, covariate balance is a multivariate notion. If we let L(Z | T = 0) and L(Z | T = 1) denote the probability laws of the confounders conditional on treatment status then, ideally, as in the case of perfect randomization, these distributions are equal in some sense. We refer to this sense of equality as covariate balance. Most covariate balance methods do not take the joint distribution of the confounders into account but rather seek to match moments of their marginal distributions. For example, Imai and Ratkovic (2014) proposed matching the first and second moments of the covariates in their algorithm. Practically, one-dimensional diagnostics such as mean comparisons of confounders between treatment groups or Kolmogorov-Smirnov statistics are used to evaluate balance. Wang and Zubizarreta (2019) have argued that, due to the inherent complexity of attempting to achieve multivariate balance, one should instead strive to achieve approximate balance between confounders.
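As a concrete illustration of the one-dimensional diagnostics just mentioned, the sketch below computes, for each covariate, the standardized mean difference and the two-sample Kolmogorov-Smirnov statistic between treatment groups. This is an illustrative helper, not part of the paper's framework, and the function name is ours.

```python
def balance_diagnostics(Z, T):
    """Per-covariate balance diagnostics: standardized mean difference
    (SMD) and Kolmogorov-Smirnov (KS) statistic between the treated
    (T == 1) and control (T == 0) groups."""
    p = len(Z[0])
    out = []
    for j in range(p):
        a = sorted(z[j] for z, t in zip(Z, T) if t == 1)  # treated values
        b = sorted(z[j] for z, t in zip(Z, T) if t == 0)  # control values
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((x - ma) ** 2 for x in a) / len(a)
        vb = sum((x - mb) ** 2 for x in b) / len(b)
        pooled_sd = ((va + vb) / 2) ** 0.5
        smd = abs(ma - mb) / pooled_sd if pooled_sd > 0 else 0.0

        # KS statistic: largest gap between the two empirical CDFs,
        # evaluated at every observed value.
        def ecdf(xs, v):
            return sum(x <= v for x in xs) / len(xs)
        ks = max(abs(ecdf(a, v) - ecdf(b, v)) for v in a + b)

        out.append({"smd": smd, "ks": ks})
    return out
```

Diagnostics like these are marginal: they can all be near zero while the joint distributions of the confounders still differ, which is the motivation for the multivariate distance introduced in this paper.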
In this paper, we propose a new theoretical approach to evaluating and understanding covariate balance. We introduce a distance metric to assess how close two multivariate distributions are to each other and define covariate balance as having zero distance. This metric is defined in terms of the function family to which the matching procedure belongs. Subsequent assessment of balance relies on understanding the behavior of the function classes in question. We demonstrate the following in the current paper:

1. The use of function classes fits naturally with the use of probability metrics (Zolotarev, 1984) for comparing probability laws and, in this instance, the multivariate distributions of the confounders conditional on treatment.

2. Results from empirical process theory (Van Der Vaart and Wellner, 1996; Kosorok, 2007) can subsequently be used to study the behavior of function classes and to make probabilistic statements about the rates of convergence of matching procedures under ideal balance.

3. Ideal balance provides a new theoretical out-of-sample justification for the methodology of Iacus et al. (2011) and can be used for the evaluation of other algorithmic strategies.

Based on this framework, one can view the techniques in this paper as akin to developing a scalable strategy for achieving covariate balance that has relatively low complexity from the viewpoint described in Section 3.
2 Background and Preliminaries

2.1 Data Structures and Causal Estimands

Let the data be represented as (Yi, Ti, Zi), i = 1, . . . , n, a random sample from the triple (Y, T, Z), where Y denotes the response of interest, T denotes the treatment group, and Z is a p-dimensional vector of covariates. We assume that T takes values in {0, 1}.

We now briefly review the potential outcomes framework (Rubin, 1974; Holland, 1986). Let {Y(0), Y(1)} denote the potential outcomes for all n subjects, and let the observed response be related to the potential outcomes by Y = (1 − T)Y(0) + TY(1). In the potential outcomes framework, causal effects are defined as within-individual contrasts based on the potential outcomes. One popularly used estimand is the average causal effect, defined as

    ACE = (1/n) Σ_{i=1}^{n} (Yi(1) − Yi(0)).
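To make the definition concrete, here is a toy computation with hypothetical data. Note that a real study observes only one potential outcome per subject, so the ACE below is computable only because both potential outcomes are written down:

```python
# Hypothetical toy data: with both potential outcomes in hand (which a
# real study never has), the ACE is the mean within-individual contrast.
Y0 = [1.0, 2.0, 0.5, 1.5]   # potential outcomes under control, Yi(0)
Y1 = [2.0, 2.5, 1.5, 2.0]   # potential outcomes under treatment, Yi(1)
T  = [1, 0, 1, 0]           # observed treatment assignment

ace = sum(y1 - y0 for y0, y1 in zip(Y0, Y1)) / len(Y0)

# The observed response reveals only one potential outcome per subject,
# via Y = (1 - T) Y(0) + T Y(1):
Y = [(1 - t) * y0 + t * y1 for t, y0, y1 in zip(T, Y0, Y1)]
```

The gap between the fully known `ace` and what is estimable from `(Y, T)` alone is exactly what the assumptions in the next paragraph are meant to bridge.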
Many assumptions are needed for performing valid causal inference. These include the consistency assumption, the treatment positivity assumption, and the strongly ignorable treatment assignment assumption (Rosenbaum and Rubin, 1983), defined as

    T ⊥ {Y(0), Y(1)} | Z.    (2.1)

Assumption (2.1) means that treatment assignment is conditionally independent of the set of potential outcomes given the covariates. Treatment positivity refers to 1 > P(T = 1 | Z) > 0 for all values of Z; thus, the intuition is that any individual can potentially receive either treatment. Finally, the consistency assumption ensures that the observed outcome and the potential outcome under the observed treatment coincide.
As described recently by Imbens and Rubin (2015), causal inference proceeds by modelling the assignment mechanism using the observed covariates. A quantity that naturally arises from this modelling is the propensity score (Rosenbaum and Rubin, 1983), the probability of receiving treatment given the confounders. The propensity score is defined as

    e(Z) = P(T = 1 | Z).

Given the treatment ignorability assumption in (2.1), it also follows by Theorem 3 of Rosenbaum and Rubin (1983) that treatment is strongly ignorable given the propensity score, i.e., T ⊥ {Y(0), Y(1)} | e(Z). Based on these assumptions and definitions, we can formulate causal inference using the following approach: (a) define an appropriate causal estimand; (b) formulate a propensity score model; (c) check for covariate balance; (d) if (c) holds, estimate the causal estimand by conditioning on the propensity scores. We note that steps (b) and (c) tend to be iterative in practice. While the results in this paper pertain to propensity-matched analyses, they apply to more general matching strategies as well.
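Step (b) is most commonly carried out with a logistic regression model for e(Z). The sketch below fits such a model by plain gradient ascent on the log-likelihood; it is purely illustrative (any binary-regression routine would serve), and the function name and tuning constants are our own choices:

```python
import math

def fit_propensity(Z, T, lr=0.1, iters=2000):
    """Fit a logistic-regression propensity model e(Z) = P(T = 1 | Z)
    by gradient ascent on the log-likelihood.  Returns a function that
    maps a covariate vector z to its estimated propensity score."""
    p = len(Z[0])
    w = [0.0] * (p + 1)                 # intercept followed by slopes
    for _ in range(iters):
        grad = [0.0] * (p + 1)
        for z, t in zip(Z, T):
            x = [1.0] + list(z)
            eta = sum(wj * xj for wj, xj in zip(w, x))
            e = 1.0 / (1.0 + math.exp(-eta))
            for j in range(p + 1):
                grad[j] += (t - e) * x[j]   # log-likelihood gradient
        w = [wj + lr * g / len(T) for wj, g in zip(w, grad)]

    def e_hat(z):
        eta = w[0] + sum(wj * zj for wj, zj in zip(w[1:], z))
        return 1.0 / (1.0 + math.exp(-eta))
    return e_hat
```

In the iterative loop of steps (b) and (c), one would fit such a model, adjust or match on the fitted scores, and re-examine balance diagnostics before estimating the causal effect.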
2.2 Previous results on covariate balance

In terms of covariate balance, a major class of theoretical results comes from work on equal percent bias reduction procedures (Rubin and Thomas, 1992, 1996). Equal percent bias reduction means that a certain type of covariate matching will reduce bias in all dimensions of Z by the same amount. Define a matching method to be affinely invariant if the matching procedure is invariant to affine transformations of the covariates. If Z given T is assumed to have a so-called elliptically symmetric distribution, then Theorem 3.1 and Corollaries 3.1 and 3.2 of Rubin and Thomas (1992) apply, so that any affinely invariant matching method will be equal percent bias reducing. Examples of elliptically symmetric distributions include the multivariate normal and t distributions. While elliptical symmetry of the confounders given treatment group is a restrictive assumption, this was relaxed in more recent work by Rubin et al. (2006). There, they assumed that the conditional distribution of Z given T is a discriminant mixture of elliptically symmetric distributions. Rubin et al. (2006) prove that a generalization of equal percent bias reduction holds for this setup as well. Thus, for equal percent bias reducing methods, we have a guarantee that attempting to increase balance in one variable will not lead to distortions in balance for other variables. However, the assumptions needed for equal percent bias reduction to hold seem restrictive in practice.
Iacus et al. (2011) took another approach, focusing on in-sample covariate discrepancies and requiring that the maximum discrepancy in sample means between treated and control subjects be bounded above by a constant. They generalize this to arbitrary functions of the data, which they term imbalance bounding, and define monotonic imbalance bounding matching methods to be those in which the discrepancy between a monotonic function applied to a variable is bounded above by a confounder-specific term. Thus, one can be more stringent in the balance of one variable without impacting the maximal imbalance across all confounders.

There are many important implications of requiring the monotonic imbalance bounding property. First, many methods of confounder adjustment, such as nearest-neighbor or caliper matching as defined in Cochran and Rubin (1973), are not monotonic imbalance bounding because they fix the number of treated and control observations within strata, while monotonic imbalance bounding methods imply variable numbers of observations. By contrast, if the caliper matching procedure were to allow for different calipers for each confounder, then it would be monotonic imbalance bounding. Iacus et al. (2011) also show that a key goal in causal effect estimation is to reduce model dependence (Ho et al., 2007), meaning that there should not be extrapolation of potential outcomes to regions of the covariate space where there are no observations. Under some assumptions on the model for the potential outcomes, they show that for monotonic imbalance bounding methods, the model dependence is upper bounded by terms involving an imbalance parameter. In addition, the estimation error for average causal effects using monotonic imbalance bounding matching methods can also be upper bounded by terms involving this parameter.
As a concrete example of a new monotonic imbalance bounding method, Iacus et al. (2011) propose a coarsened exact matching algorithm for creating strata. It proceeds as follows:

1. For each variable Zj (j = 1, . . . , p), coarsen it into a function Cj(Zj) that takes on fewer values than the unique values of Zj.

2. Perform exact matching between treated and control observations using the vector (C1(Z1), C2(Z2), . . . , Cp(Zp)). This effectively creates strata S1, . . . , SJ based on the unique combinations of (C1(Z1), C2(Z2), . . . , Cp(Zp)).

3. Discard strata containing only observations with T = 0. For strata containing only observations from the T = 1 population, either extrapolate the potential outcome Y(0) using the available controls, or discard them and restrict the causal effect of interest to the treated units for which the effect can be identified without further model-based assumptions. For strata with both treated and control observations, compare the outcomes between the two populations.
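The three steps above can be sketched in a few lines. This is a minimal illustration only, with equal-width binning as one hypothetical choice of the coarsening functions Cj; the published R and Stata implementations offer far more refined coarsenings.

```python
from collections import defaultdict

def coarsened_exact_match(Z, T, n_bins=4):
    """Minimal coarsened-exact-matching sketch.

    Z: list of covariate vectors (one per unit); T: 0/1 treatment labels.
    Each covariate Z_j is coarsened into n_bins equal-width bins (a simple
    stand-in for C_j); units are then matched exactly on the bin signature.
    Returns the strata containing both treated and control units (step 3).
    """
    p = len(Z[0])
    lo = [min(z[j] for z in Z) for j in range(p)]
    hi = [max(z[j] for z in Z) for j in range(p)]

    def coarsen(x, j):
        # Map x to its equal-width bin index in {0, ..., n_bins - 1}.
        if hi[j] == lo[j]:
            return 0
        b = int((x - lo[j]) / (hi[j] - lo[j]) * n_bins)
        return min(b, n_bins - 1)

    strata = defaultdict(list)
    for i, z in enumerate(Z):
        key = tuple(coarsen(z[j], j) for j in range(p))
        strata[key].append(i)

    # Keep only strata with both treated and control observations.
    return {k: idx for k, idx in strata.items()
            if len({T[i] for i in idx}) == 2}
```

For example, with one covariate and two bins, units with nearby covariate values fall into the same stratum and are retained only when both treatment arms are represented there.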
Iacus et al. (2011) have developed very easy-to-use software packages for implementing coarsened exact matching in R and Stata. They show that the coarsened exact matching approach satisfies the monotonic imbalance bounding property with respect to a variety of functionals of interest. In addition, they provide a very intuitive explanation of what coarsened exact matching attempts to mimic. While classical propensity score approaches attempt to mimic a randomized study, analyses using coarsened exact matching mimic randomized block designs, where the blocks are by definition predictive of the potential outcomes. It is well known that in this situation randomized block designs yield more efficient estimators (e.g., Box, Hunter and Hunter, 1978).
The other approach of recent interest has been to incorporate covariate balance into the causal effect estimation process itself. For example, Imai and Ratkovic (2014) propose using the generalized method of moments for causal effect estimation, with covariate balance treated as a constraint in the procedure. Chan et al. (2016) propose calibration estimators for causal effect estimation in which covariate balance constraints lead to a constrained Lagrangian dual optimization problem. For both approaches, the authors are able to develop consistency and asymptotic normality results for the causal effect estimators.
As described in more detail in Section 3.1, we will use an integral probability metric to assess covariate balance between the two populations. A similar metric is used in Kallus (2020), where it is defined as the target error to be minimized when obtaining optimal weighting coefficients for estimating the sample average treatment effect on the treated. While the two approaches are complementary, there are several notable differences. First, Kallus (2020) uses the metric to find weights that correspond to known matching methods, and the functions involved in the metric represent the expected relationship between potential outcomes and covariates. In our case, we take any matching procedure and, given its measure of match, bound it by the probability metric involving functions representing the matching procedure itself, providing probability bounds on how good the matching is. In addition, Kallus (2020) assumes a fixed population and therefore no randomness in the covariate values, whereas our concern focuses precisely on the sampling distribution of these covariates. The difference between the two approaches is explained further in Section 2.3.
+page_content='3 Modes of inference and covariate balance In looking at the various proposals for accommodating covariate balance, it is useful to reconsider the ways in which one can perform causal inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Imbens and Rubin (2015) have a nice overview on the distinction between finite-population and superpopulation modes for causal inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' The finite-population mode of causal inference treats the sampled units as the population of interest.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' The stochastic nature of the experiment is due solely to the treatment mechanism so that randomness occurs only with respect to the treatment assignments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' If one adopts the finite-sample point of view 5 for causal inference, then one can use a randomization-based approach to performing inference for causal effects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' By contrast, the superpopulation mode of inference considers two sources of variability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' The first is due to the randomness in the treatment assignments, and the second is due to the fact that the sampling units are a random sample from a superpopulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Thus, this approach posits a superpopulation from which the sampling units come from.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Revisiting the previous work from 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='2, the equal percent bias reduction theory and the work of Iacus et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' (2011) posit results about covariate balance assuming a finite-population mode for causal inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Thus, covariate balance results of these methods will involve subsampling and matching from the sampling units, and the balance occurs with respect to the matched sample.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' The concept of balance we introduce in the next section can accommodate both modes of inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
3 Main Results

3.1 Ideal Balance

In this section, we wish to study covariate balance from the viewpoint of comparing the distributions L(Z | T = 0) and L(Z | T = 1). To do so, we must determine how this comparison is made, and we begin by defining probability pseudometrics.

Definition 3.1 (Pseudometric). Let A be the set of probability measures defined on a shared measurable space. A function m : A × A → [0, ∞) is a pseudometric on A if, for all µ, ν, λ ∈ A, the following conditions are satisfied:

1. m(µ, µ) = 0;
2. m(µ, ν) = m(ν, µ);
3. m(µ, ν) ≤ m(µ, λ) + m(λ, ν).
Note that these properties almost make m a metric on A; notably, we do not assume that if the distance between two elements is zero, then the two elements are the same. For the purposes of this paper, we will abuse terminology and refer to pseudometrics as metrics. The class of metrics we work with in this article is given by

γ_F(µ, ν) = sup_{f∈F} | ∫ f dµ − ∫ f dν |,    (3.1)

where F is a class of functions. The quantity γ_F(µ, ν) in (3.1) is referred to by Zolotarev (1984) as an example of a probability metric. In our notation, we drop the dependence of γ_F on F and write it as γ. We now define ideal balance based on (3.1).
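For simple function classes, the metric (3.1) can be computed exactly between two empirical distributions. The following sketch assumes the illustrative class of half-line indicators F = {1{x ≤ t} : t ∈ R} (our choice, not one singled out in the text), for which γ_F reduces to the Kolmogorov-Smirnov distance between the empirical CDFs.

```python
def gamma_halflines(x0, x1):
    """gamma_F between two empirical distributions, for the class
    F = {1{x <= t}} of half-line indicators: the sup in (3.1) becomes
    the maximum gap between the two empirical CDFs (the KS distance)."""
    def ecdf(sample, t):
        return sum(x <= t for x in sample) / len(sample)

    # The supremum over t is attained at one of the observed points.
    grid = sorted(set(x0) | set(x1))
    return max(abs(ecdf(x0, t) - ecdf(x1, t)) for t in grid)
```

Two identical samples give γ_F = 0 (empirical ideal balance), while two fully separated samples give the maximal value 1.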
Definition 3.2 (Ideal Balance). Let µ and ν be distributions on the same probability space and m a pseudometric. We say that µ and ν satisfy ideal balance with respect to m if m(µ, ν) = 0.

When µ and ν are the conditional distributions of the covariates given the treatment group, as in Section 2, ideal balance is a restriction on the population. If they are instead the empirical distributions of the data, ideal balance is a restriction on the sample. Matching methods, in a sense, intend to achieve ideal balance on the matched data for some m. Note that at this stage we have dealt only with population distributional laws and have not described how to estimate or compute these quantities from real data. In practice, we would not expect ideal balance to hold in observational studies. However, it serves as a useful benchmark through which we can study the behavior of various functional constraints. Here, the function spaces F in (3.1) play the role of the constraints; more complex function spaces correspond to more constraints on the joint distributions of Z | T = 1 and Z | T = 0.
3.2 A Concentration Inequality Result

Let F be a function space and ∥·∥ a norm. The covering number N(ϵ, F, ∥·∥) is the minimum number of ∥·∥-balls of radius ϵ needed to cover F, where the ball centered at f ∈ F is the set {g : ∥f − g∥ ≤ ϵ}. Intuitively, one can think of the covering number as a measure of the complexity of the function class F. For a measure µ, the L_r(µ)-norm, for r ≥ 1, is defined by ∥f∥^r_{L_r(µ)} = ∫ |f|^r dµ. Throughout the paper, we assume F is uniformly bounded. Note that if µ is any probability measure, then under uniform boundedness we can endow F with the L_r(µ)-norm without dropping any of its elements. Unless otherwise specified, we assume the functions in F take values in [0, 1]. Finally, an envelope function of a function class F is any function h such that |f(x)| ≤ |h(x)| for all f ∈ F and all x.

Let {Z_i}_{i=1}^n be a sample where each Z_i has distribution Q, and denote the empirical distribution by Q_n. The F-indexed empirical process G_n^Q is defined as the map taking any f ∈ F to

G_n^Q(f) = √n ( ∫ f dQ_n − ∫ f dQ ) = (1/√n) ∑_{i=1}^n ( f(Z_i) − ∫ f dQ ).
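When the population integral ∫ f dQ is known, the empirical process can be evaluated directly at a single f. A minimal sketch (the function name and interface are our own):

```python
import math

def empirical_process(f, sample, mean_f):
    """Evaluate G_n^Q at one function f:
    G_n(f) = sqrt(n) * (empirical mean of f - population mean of f),
    where mean_f is the (assumed known) population integral of f dQ."""
    n = len(sample)
    emp_mean = sum(f(z) for z in sample) / n
    return math.sqrt(n) * (emp_mean - mean_f)
```

For instance, if Q is Uniform(0, 1) and f is the identity (so ∫ f dQ = 1/2), a sample whose mean happens to equal 1/2 gives G_n(f) = 0, and deviations of the sample mean are magnified by √n.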
Theorem 3.3. Let Q^0_{n_0} and Q^1_{n_1} be the empirical distributions of observations sampled from Q^0 and Q^1, respectively, and assume ideal balance holds for Q^0 and Q^1 with respect to γ. Let M be the collection of probability measures. If there exist constants C and K such that F satisfies

sup_{µ∈M} N(ϵ, F, ∥·∥_{L_r(µ)}) ≤ (K/ϵ)^C  for every 0 < ϵ < C,

then

Pr{ γ(Q^0_{n_0}, Q^1_{n_1}) > δ } ≤ ( Dδ / (2√C) )^C [ n_0^{C/2} exp(−n_0 δ²/2) + n_1^{C/2} exp(−n_1 δ²/2) ],    (3.2)

where D is a constant depending only on K.

The proofs of Theorem 3.3 and subsequent results can be found in the supplementary material. Throughout the paper, we write B_n(δ, D, C) for the bound in Theorem 3.3, where the subscript n reminds us of the dependence on the sample sizes.
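The bound B_n(δ, D, C) is explicit, so its decay in the sample sizes can be inspected numerically. The sketch below evaluates the right-hand side of (3.2) as reconstructed here; the constants D and C passed in are illustrative placeholders only, since in practice they depend on the covering-number behavior of F.

```python
import math

def concentration_bound(delta, n0, n1, D, C):
    """B_n(delta, D, C): the right-hand side of the tail bound (3.2).
    D and C are the (generally unknown) constants of Theorem 3.3."""
    lead = (D * delta / (2.0 * math.sqrt(C))) ** C
    tail = (n0 ** (C / 2.0) * math.exp(-n0 * delta ** 2 / 2.0)
            + n1 ** (C / 2.0) * math.exp(-n1 * delta ** 2 / 2.0))
    return lead * tail
```

For fixed δ, the exponential terms dominate the polynomial factors n^{C/2}, so the bound shrinks rapidly as n_0 and n_1 grow.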
Remark 3.4. We note that the bound in (3.2) is nonasymptotic and holds for any sample size.

Remark 3.5. In this framework, the function classes play an important role. Theorem 3.3 gives a bound in terms of the entropy number of the function class in question; in particular, this approach favors low-complexity function classes. A key technical point is ensuring that the covering number condition of the theorem is satisfied. To do so, we will primarily use results from Vapnik-Chervonenkis theory (Chervonenkis and Vapnik, 1971) to determine appropriate covering numbers.

In most cases the function classes of interest are not real-valued but vector-valued. The following straightforward results can be used to handle these cases.
Lemma 3.6. Let {F_i}_{i=1}^d be a collection of real-valued function spaces and let (P^i, Q^i) satisfy ideal balance under γ_{F_i} for each 1 ≤ i ≤ d. Let (P_i, Q_i) denote their respective empirical distributions, with implicit dependence on the sample sizes. Then

Pr{ ∑_{i=1}^d γ_{F_i}(P_i, Q_i) > δ } ≤ ∑_{i=1}^d B(δ/d, D_i, C_i).
Now consider the collection {F_i}_{i=1}^d, where each F_i is a real-valued function space, and define F = { f = (f_1, . . . , f_d)^T : f_i ∈ F_i for all i }. Let π_ℓ be the ℓth coordinate projection, that is, for a finite-dimensional vector x = (x_1, . . . , x_d), π_ℓ(x) = x_ℓ. Finally, define F_π = { π_ℓ ∘ f : f ∈ F, 1 ≤ ℓ ≤ d }; note that the elements of F_π are real-valued. The following lemma tells us that we can equivalently assume µ and ν satisfy ideal balance with respect to each of the γ_{F_i}, or that they satisfy ideal balance with respect to γ_{F_π}.
Lemma 3.7. Let F, {F_i}_{i=1}^d, and F_π be as above, and let µ and ν be two probability measures. Then the following are equivalent:

1. µ and ν satisfy ideal balance with respect to γ_{F_π};
2. µ and ν satisfy ideal balance with respect to each γ_{F_i}, 1 ≤ i ≤ d;
3. max_i γ_{F_i}(µ, ν) = 0.
The following corollary will be very useful.

Corollary 3.8. Let $F$ and $F_\pi$ be as above, and let $F_i = F^*$ for all $i$. Assume $F^*$ has polynomial covering number. Let $\{X_j^0\}_{j=1}^{n_0} \sim Q^0$ and $\{X_j^1\}_{j=1}^{n_1} \sim Q^1$, where $Q^0$ and $Q^1$ satisfy ideal balance with respect to $\gamma_{F_\pi}$. Fix $f^* \in F$. Then
\[ \Pr\left( \left\| \frac{1}{n_0}\sum_{j=1}^{n_0} f^*(X_j^0) - \frac{1}{n_1}\sum_{j=1}^{n_1} f^*(X_j^1) \right\|_{\ell_p} > \delta \right) \le d\, B(\delta/d^{1/p}, D^*, C^*) \]
for finite $p \ge 1$, and
\[ \Pr\left( \left\| \frac{1}{n_0}\sum_{j=1}^{n_0} f^*(X_j^0) - \frac{1}{n_1}\sum_{j=1}^{n_1} f^*(X_j^1) \right\|_{\ell_\infty} > \delta \right) \le d\, B(\delta, D^*, C^*), \]
where $D^*$ and $C^*$ depend only on $F^*$.
Definition 3.9 (Vapnik-Chervonenkis Dimension). The Vapnik-Chervonenkis dimension of a function class $F$ on an ambient set $X$ is the cardinality of the largest subset of $X$ shattered by $F$. A function class $F$ shatters a set $S \subseteq X$ if, for each possible 0-1 labeling of the elements of $S$, there is at least one function $f \in F$ that realizes that labeling.
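As a toy illustration of shattering, consider the class of one-dimensional threshold functions $f_t(x) = I[x \ge t]$ (our own example, not from the paper); its Vapnik-Chervonenkis dimension is 1, which a brute-force check confirms:

```python
# Brute-force shattering check for the class of 1-D threshold functions
# f_t(x) = 1[x >= t] (a toy class with VC dimension 1).
def shatters(points, thresholds):
    labelings = set()
    for t in thresholds:
        labelings.add(tuple(int(x >= t) for x in points))
    return len(labelings) == 2 ** len(points)

thresholds = [x + 0.5 for x in range(-5, 6)]
print(shatters([0.0], thresholds))        # True: a single point is shattered
print(shatters([0.0, 1.0], thresholds))   # False: the labeling (1, 0) is unreachable
```

No pair of points can be shattered, since labeling the smaller point 1 forces the larger point to be labeled 1 as well.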
A key result we will use is an application of Theorem 2.6.7 of Van der Vaart and Wellner (1996), which implies that if a function class $G$ has finite Vapnik-Chervonenkis dimension $v$, then
\[ \sup_\mu N(\epsilon, G, L_2(\mu)) \le \left( \frac{K}{\epsilon} \right)^{C^*}, \]
where $C^* = 2v - 2$.
4 Examples

4.1 Balance on coarsened function classes

Consider coarsened exact matching as described in Iacus et al. (2011). Let $Z^0 = \{Z_i^0\}_{i=1}^{n_0}$ and $Z^1 = \{Z_j^1\}_{j=1}^{n_1}$ be the control and treatment samples, respectively. In coarsened exact matching we create a partition of the sample space, match samples found in the same element of the partition, and discard samples in subsets without samples from the opposite group. We are interested in the quantity
\[ \Delta = \frac{1}{m_0} \sum_{i \in M_0} w_i^0 Z_i^0 - \frac{1}{m_1} \sum_{j \in M_1} w_j^1 Z_j^1, \]
where $m_\ell$ is the number of matched samples for the $\ell$th group, $M_\ell$ is its index set, and $\{w_i^0, w_j^1\}_{i \in M_0, j \in M_1}$ are weights. In the supplementary material we describe how to express this matching procedure as a function $f$ on the variables $Z_i^0$ and $Z_j^1$, which allows us to express $\Delta$ in terms of $f$. We further specify the function space $F$ for which $\|\Delta\| \le \gamma_F(Q_{n_0}^0, Q_{n_1}^1)$ holds for an appropriate norm.
Using the properties of $F$, together with the bound above, we can derive our results of interest:
\[ \Pr(|\Delta_k| \ge \delta) \le B(\delta, D, C^*), \]
for a constant $C^*$, where $\Delta_k$ is the $k$th component of $\Delta$. Similarly, $\Pr(\|\Delta\|_{\ell_p} \ge \delta) \le d\,B(\delta/d^{1/p}, D, C^*)$ and $\Pr(\|\Delta\|_{\ell_\infty} \ge \delta) \le d\,B(\delta, D, C^*)$.
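The coarsening-and-matching step can be sketched directly. The following minimal numpy example is our own illustration (the bin edges, sample sizes, and equal within-cell weights are simplifying choices, not the weighting of Iacus et al.): it coarsens a single covariate into bins, keeps only cells containing both groups, and computes $\Delta$ as a weighted mean difference.

```python
import numpy as np

rng = np.random.default_rng(0)
z0 = rng.normal(1.0, 2.0, size=200)   # control covariate
z1 = rng.normal(1.2, 2.0, size=150)   # treatment covariate

# Coarsen: assign each sample to a cell of the partition.
edges = np.arange(-8, 10, 2.0)        # illustrative bin edges
c0 = np.digitize(z0, edges)
c1 = np.digitize(z1, edges)

# Discard cells lacking samples from the opposite group.
common = np.intersect1d(np.unique(c0), np.unique(c1))
keep0, keep1 = np.isin(c0, common), np.isin(c1, common)

# Equal weights within matched cells (a simplification of CEM weighting).
m0, m1 = keep0.sum(), keep1.sum()
delta = z0[keep0].mean() - z1[keep1].mean()
print(m0, m1, abs(delta))
```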
4.2 Covariate balance on the linear propensity score

As discussed in Section 3, there has been a lot of work on developing matching results based on linear discriminant analysis. That is, we assume that $P(Z \mid T = \ell)$ follows $N(\mu_\ell, \Sigma)$. Under this model, the metric for consideration is the logit of the propensity score (see Stuart (2010)). In the supplementary material we show that the distance $|\mathrm{logit}(e(Z)) - \mathrm{logit}(e(Z'))|$ can be expressed in terms of the linear discriminant analysis hyperplane vector. Indeed, if $p$ is the dimension of the covariates, we can create a function space $F$ derived from hyperplanes, with Vapnik-Chervonenkis dimension $p + 1$, such that
\[ \Delta = \left| \frac{1}{m_0} \sum_{i \in M_0} \mathrm{logit}(e(Z_i)) - \frac{1}{m_1} \sum_{j \in M_1} \mathrm{logit}(e(Z_j)) \right| \le \gamma_F(Q_{n_0}^0, Q_{n_1}^1), \]
allowing us, using Theorem 3.3, to determine the bound of interest: $\Pr\{\Delta > \delta\} \le B(\delta, D, 2p)$.
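Under the LDA model, $\mathrm{logit}(e(z))$ is affine in $z$ with slope $\Sigma^{-1}(\mu_1 - \mu_0)$, so the diagnostic reduces to a mean difference of linear scores. A minimal plug-in sketch (our own illustration with simulated Gaussian groups, not the authors' estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3
mu0, mu1 = np.zeros(p), np.full(p, 0.5)
z0 = rng.multivariate_normal(mu0, np.eye(p), size=300)  # controls
z1 = rng.multivariate_normal(mu1, np.eye(p), size=300)  # treated

# Plug-in LDA hyperplane vector: beta = Sigma^{-1} (mu1 - mu0),
# with Sigma estimated by the pooled within-group covariance.
pooled = np.cov(np.vstack([z0 - z0.mean(0), z1 - z1.mean(0)]).T)
beta = np.linalg.solve(pooled, z1.mean(0) - z0.mean(0))

# Balance diagnostic: absolute mean difference of linear scores,
# proportional to the gap in logit propensity scores under the model.
delta = abs((z1 @ beta).mean() - (z0 @ beta).mean())
print(delta)
```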
4.3 Covariate balance using kernels

Many authors (Hazlett, 2016; Wong and Chan, 2018; Zhu et al., 2018) have advocated for the use of kernel methods for matching and evaluating covariate balance. This corresponds to assuming that $F$ in (3.1) represents a reproducing kernel Hilbert space. Further details about these function spaces can be found in the supplementary material.

To apply Theorem 3.3 to the kernel setting, we note that there exists a version of the linear discriminant analysis of Section 4.2 that extends to the reproducing kernel Hilbert space setting (Baudat and Anouar, 2000). Let $H$ be a reproducing kernel Hilbert space and $\|\cdot\|_H$ its associated norm; then a natural metric to consider for a kernelized matching procedure is
\[ \Delta_H = \left\| \frac{1}{m_0} \sum_{i \in M_0} f(Z_i) - \frac{1}{m_1} \sum_{j \in M_1} f(Z_j) \right\|_H, \]
which is a functional generalization of $\Delta$ from Section 4.2, where $f \in H$ is an appropriate function chosen by the user. Then $\Delta_H \le \gamma_F(Q_{n_0}^0, Q_{n_1}^1)$, and we can use the previous results with a few adjustments. We show in the supplementary material that $P(\Delta_H > \delta) \le B(\delta, D, C^*)$, where $C^*$ depends on the smoothness properties of $H$.
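When $F$ is taken to be the unit ball of a reproducing kernel Hilbert space, $\gamma_F$ between two empirical distributions is the kernel maximum mean discrepancy, which can be computed in closed form from the Gram matrices. A minimal numpy sketch with a Gaussian kernel (the bandwidth and data are illustrative choices):

```python
import numpy as np

def mmd(x, y, bandwidth=1.0):
    """Empirical MMD between samples x and y under an RBF kernel."""
    def gram(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    # ||mean embedding of x - mean embedding of y||_H, squared then rooted.
    return np.sqrt(gram(x, x).mean() + gram(y, y).mean() - 2 * gram(x, y).mean())

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=(100, 2))
y = rng.normal(0.5, 1.0, size=(100, 2))
print(mmd(x, y))   # larger when the two samples are imbalanced
print(mmd(x, x))   # zero for identical samples
```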
5 Practical implementation

So far, we have given theoretical results that describe how algorithms under various function classes behave under the ideal balance assumption. As noted earlier, the ideal balance definition is strict, but it permits theoretical characterization of various algorithms. The question then naturally arises as to how to use the theoretical results from the previous sections in practice.

Note that one can view the metric in equation (3.1) as a multivariate balance metric, which differentiates it from many other balance metrics in the literature. Zhu et al. (2018) used (3.1), where $F$ is a reproducing kernel Hilbert space, as a covariate balance diagnostic. There, they found that in certain situations the diagnostic was more sensitive in detecting covariate imbalances than univariate diagnostics and those based on the prognostic score (Hansen, 2008).
Consider the problem of estimating the average causal effect among the treated. In practice, it is unlikely that ideal balance will hold for the treatment and control populations; that is, $\gamma_F(Q^0, Q^1) \ne 0$ unless treatment is randomized. Therefore, we would not be able to use Theorem 3.3 in an observational study. However, a slight modification can be made for which the analysis remains largely the same. Let $w \in W \subset \mathbb{R}^{n_0}$ be a weight vector and define
\[ Q_w^0 = \frac{1}{\sum_{i : T_i = 0} w_i} \sum_{i : T_i = 0} w_i \delta_{X_i}. \]
The majority of methods in causal inference aim to find appropriate weights $w$ for which $Q_w^0$ converges to some distribution $Q^*$ that indeed satisfies ideal balance with $Q^1$, that is, for which $\gamma_F(Q^*, Q^1) = 0$. For this modification to be feasible, we need only adapt our proof of Theorem 3.3 to include the convergence rate of $Q_w^0$ to $Q^*$, which may change depending on the problem. Having done so, we continue in a parallel manner. Let $f^* \in F$ represent a matching procedure with balance diagnostic
\[ \Delta = \left| \int f^* \, dQ_w^0 - \int f^* \, dQ_{n_1}^1 \right|; \]
then, by the definition of $\gamma_F$, $\Delta \le \gamma_F(Q_w^0, Q_{n_1}^1)$. Therefore, if we can find weights for which $Q_w^0$ converges to $Q^*$ and $\gamma_F(Q^*, Q^1) = 0$, then we can bound the probability that $\Delta$ exceeds some threshold $\delta$.
There are many methods for finding $w \in W$, the most straightforward being the inverse probability of treatment weights,
\[ w_i = T_i + \frac{e(Z_i)(1 - T_i)}{1 - e(Z_i)}. \]
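These weights are immediate to compute once a propensity score is fitted. A sketch in numpy (the propensity scores here are generated at random purely for illustration, not fitted by any particular estimator):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
T = rng.integers(0, 2, size=n)       # binary treatment indicator
e = rng.uniform(0.1, 0.9, size=n)    # fitted propensity scores (illustrative)

# Inverse-probability-of-treatment weights for the effect on the treated:
# treated units get weight 1, controls get e / (1 - e).
w = T + e * (1 - T) / (1 - e)
print(w)
```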
Even the heavily prescribed matching algorithms found throughout the causal inference literature produce some weights $w \in W$, as described by Abadie and Imbens (2006). In one-to-one matching with replacement, let $J(i) = \{j_1(i), j_2(i), \ldots\}$ be the set of indices of units that are matched with unit $i = 1, 2, \ldots, n$. If there are no ties, then $J(i) = j(i)$. With ties present, which occur frequently, especially with exact matching (see coarsened exact matching), $J(i)$ might contain multiple matched indices. The matching process allows us to produce weights for every unit by solving
\[ w_i = \sum_{\{l : T_l = 1\}} \frac{I[i \in J(l)]}{\# J(l)} \quad \text{for all } i \in \{i : T_i = 0\}, \]
where $\# J(i)$ denotes the cardinality of $J(i)$.
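The weight formula above can be computed directly once the matched sets are known. A minimal sketch (the matched sets $J$ below are made up for illustration):

```python
# Hypothetical matched sets: treated unit l -> indices of its matched controls J(l).
J = {0: [2, 3], 1: [3]}        # treated units 0 and 1; controls 2, 3, 4
control_idx = [2, 3, 4]

# w_i = sum over treated l of I[i in J(l)] / #J(l), for each control i.
w = {i: sum(1.0 / len(Jl) for Jl in J.values() if i in Jl) for i in control_idx}
print(w)   # -> {2: 0.5, 3: 1.5, 4: 0}
```

Control 3 accumulates weight from both treated units (one of which splits its weight over a tie), control 4 is never matched, and the weights sum to the number of treated units.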
6 Simulation Studies

We perform a simulation study to evaluate the distribution of the distances reported in Section 4. We also examine their downstream consequences for estimating average treatment effects on the treated. We consider two data generating mechanisms. In addition, we vary the sample size and the variance of the responses, for a total of eight scenarios. We replicate each of these scenarios, described below, over 1000 iterations. We report the mean and Monte Carlo standard errors of the three distances ($\Delta$) examined in Section 4 (Table 1), along with the kernel density estimates for one representative scenario (Figure 1). We also evaluate the downstream effects of these $\Delta$ statistics on the average treatment effect using the one-to-one matching methods described by Abadie and Imbens (2006), implemented in the Matching package (Sekhon, 2008) (Tables 2 and 6).
For $i = 1, 2, \ldots, n$, let $Z_{i1} \sim N(1, 4)$, $Z_{i2} \sim \mathrm{Bin}(1, 0.3)$, $Z_{i3} \sim N(0, 1)$, and $Z_{i4} \sim \mathrm{Bin}(1, 0.5)$, and let $T_i$ denote the binary treatment assignment. The conditional means of the outcomes for the treated, $\mu_1(Z_i)$, and the controls, $\mu_0(Z_i)$, are constructed as
\[ \mu_0(Z_i) = 10 - 3Z_{i1} - Z_{i2} + Z_{i3} + 3Z_{i4} \quad \text{and} \quad \mu_1(Z_i) = \mu_0(Z_i) + 5 + 3Z_{i1} - Z_{i2} + Z_{i3} - 3Z_{i4}. \tag{6.1} \]
We sample $T_i$ from a $\mathrm{Bin}(1, 0.5)$ distribution. For $i = 1, 2, \ldots, n$, we sample the counterfactual responses $Y_i(1) \sim N[\mu_1(Z_i), \sigma^2]$ and $Y_i(0) \sim N[\mu_0(Z_i), \sigma^2]$. The observed outcome is $Y_i = T_i Y_i(1) + (1 - T_i) Y_i(0)$. We refer to these conditions with the label "baseline". For the error variance, we set $\sigma^2 \in \{5, 10\}$. For the scenario labeled "sparse", we include an additional set of covariates that ultimately do not affect the outcome. The outcomes are determined by the potential outcome models in (6.1), yet the methods we consider also account for the noise covariates $Z_{i5} \sim N(-1, 4)$, $Z_{i6} \sim \mathrm{Bin}(1, 0.7)$, $Z_{i7} \sim N(0, 1)$, and $Z_{i8} \sim \mathrm{Bin}(1, 0.5)$.
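The baseline data generating process can be sketched directly. In this sketch we interpret $N(1, 4)$ as mean 1 and variance 4 (an assumption about the notation), and the seed and return format are our own choices:

```python
import numpy as np

def simulate_baseline(n, sigma2, seed=0):
    """Draw one replicate of the baseline scenario in (6.1)."""
    rng = np.random.default_rng(seed)
    Z1 = rng.normal(1, 2, n)            # N(1, 4): variance 4 -> sd 2
    Z2 = rng.binomial(1, 0.3, n)
    Z3 = rng.normal(0, 1, n)
    Z4 = rng.binomial(1, 0.5, n)
    T = rng.binomial(1, 0.5, n)         # randomized treatment
    mu0 = 10 - 3 * Z1 - Z2 + Z3 + 3 * Z4
    mu1 = mu0 + 5 + 3 * Z1 - Z2 + Z3 - 3 * Z4
    Y0 = rng.normal(mu0, np.sqrt(sigma2))   # counterfactual responses
    Y1 = rng.normal(mu1, np.sqrt(sigma2))
    Y = T * Y1 + (1 - T) * Y0               # observed outcome
    return np.column_stack([Z1, Z2, Z3, Z4]), T, Y

Z, T, Y = simulate_baseline(1000, 5)
print(Z.shape, T.mean())
```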
As mentioned before, we test the three examples described in Section 4 on their ability to produce efficient, unbiased estimates of the average treatment effect on the treated. Linear discriminant analysis sets $f$ to be the logit transformation of the fitted posterior probability that each unit receives treatment. The support vector machine example uses the distance of each point from the resulting separating hyperplane, assuming a linear kernel. Coarsened exact matching is performed similarly to what is described in Iacus et al. (2011) and is implemented with the cem R package.
Table 1 shows the results of our simulation experiment. Since balance is already achieved through randomization in this simulation, we also report the unmatched, crude estimate of the average causal effect for reference. Here the value $\Delta$ is the maximum absolute sample mean difference for the unweighted covariates. The values of $\Delta$ are not necessarily directly comparable in this example, but they do represent the distributions whose tail probabilities we bound in the theorems above. The simulation serves to characterize some of the densities of these statistics so that we might better understand which values of $\delta$ are acceptable for the different balance methods in Section 4.
We see that the values of $\Delta$ after coarsened exact matching were the most heavily concentrated, followed closely by the values

Table 1:
n     σ²   Scenario   θ     A            B            C            D
1000  5    baseline   6.2   0.11 (0.07)  0.03 (0.02)  0.02 (0.01)  0.09 (0.04)
1000  5    sparse     6.2   0.15 (0.07)  0.01 (0.01)  0.03 (0.02)  0.13 (0.05)
1000  10   baseline   6.2   0.12 (0.07)  0.03 (0.02)  0.02 (0.01)  0.09 (0.05)
1000  10   sparse     6.2   0.15 (0.07)  0.01 (0.01)  0.03 (0.02)  0.13 (0.05)
2000  5    baseline   6.2   0.08 (0.05)  0.02 (0.01)  0.01 (0.01)  0.06 (0.03)
2000  5    sparse     6.2   0.11 (0.05)  0.01 (0.01)  0.02 (0.
+page_content='01) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='09 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='04) 2000 10 baseline 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='08 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='05) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='02 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='01) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='01 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='01) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='06 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='03) 2000 10 sparse 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='11 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='05) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='01 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='01) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='02 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='01) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='09 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content='04) Table 1: Average and Monte Carlo standard error of ∆ found in the experiment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' In this table, Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C to linear discriminant analysis, and Method D to support vector machines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Since both A and B create a vector valued ∆ we report the maximum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
…generated by linear discriminant analysis. The balance diagnostics from a support vector machine and from an unweighted comparison yielded considerably more dispersed values.

One point of direct comparison between the different ∆ estimates is the downstream effect of the various balancing methods on estimation of the average treatment effect. This portion of the simulation study shows how the concentration of the distribution of ∆ may have little to do with the actual quality of the average treatment effect estimates, the ultimate result for causal inference. Although the distribution of ∆ under coarsened exact matching was the narrowest of the densities found for ∆ (compared with linear discriminant analysis and support vector machines), the corresponding estimate of the average treatment effect is also the most biased, and its Monte Carlo standard errors appear larger than those of the other two balancing methods. Linear discriminant analysis likewise yielded a narrow concentration of ∆ statistics, yet produced the most efficient estimates of the average treatment effect apart from the unweighted estimate, which had the smallest Monte Carlo standard errors. This result is interesting because the unweighted diagnostics had the most dispersed values of ∆. This leads us to believe that the scale of the ∆ statistics must be carefully considered when evaluating balance in order to determine which method is most suitable for estimating treatment effects.
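The scale issue can be made concrete with a small sketch. This is a minimal illustration, not the paper's exact ∆: it computes an IPM-style discrepancy between a treated sample and a weighted control sample over a hypothetical finite set of test functions (here, the first three moments), so the magnitude of the resulting statistic depends directly on which functions and weights are chosen.

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_hat(x_treat, x_ctrl, w_ctrl, test_fns):
    """IPM-style balance statistic: largest absolute difference in
    (weighted) means of the test functions between the two groups."""
    w = w_ctrl / w_ctrl.sum()
    return max(
        abs(f(x_treat).mean() - (w * f(x_ctrl)).sum())
        for f in test_fns
    )

# Hypothetical one-covariate example: moments up to order 3 as test functions.
fns = [lambda x: x, lambda x: x**2, lambda x: x**3]
x1 = rng.normal(0.5, 1.0, 500)   # treated covariate draws
x0 = rng.normal(0.0, 1.0, 500)   # control covariate draws
uniform = np.ones_like(x0)       # unweighted comparison

print(delta_hat(x1, x0, uniform, fns))
```

Rescaling or swapping the test functions changes the numeric value of the statistic without changing the underlying imbalance, which is the sense in which the scale of ∆ must be interpreted with care.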
n     σ2   Scenario   θ     A            B            C            D
1000  5    baseline   6.2   6.20 (0.33)  6.24 (0.33)  6.20 (0.42)  6.20 (0.36)
1000  5    sparse     6.2   6.20 (0.34)  6.29 (1.24)  6.21 (0.45)  6.20 (0.39)
1000  10   baseline   6.2   6.20 (0.37)  6.22 (0.40)  6.20 (0.47)  6.20 (0.42)
1000  10   sparse     6.2   6.19 (0.35)  6.31 (1.46)  6.20 (0.46)  6.22 (0.42)
2000  5    baseline   6.2   6.19 (0.24)  6.21 (0.24)  6.20 (0.29)  6.20 (0.25)
2000  5    sparse     6.2   6.20 (0.23)  6.34 (0.71)  6.21 (0.29)  6.21 (0.26)
2000  10   baseline   6.2   6.21 (0.25)  6.21 (0.26)  6.19 (0.32)  6.21 (0.28)
2000  10   sparse     6.2   6.21 (0.25)  6.38 (0.79)  6.21 (0.31)  6.21 (0.27)

Table 2: Summary of simulation estimates and Monte Carlo standard errors. The simulation scenarios corresponding to "baseline" and "sparse" are described in further detail in Section 6. Here, θ refers to the population average treatment effect among the treated. In this table, Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C is linear discriminant analysis, and Method D is support vector machines.
Figure 1: Kernel densities of the ∆ balancing statistics for the baseline scenario with n = 1000 and σ2 = 10. The solid line is the distribution from the unweighted estimates, the dashed line is the distribution for coarsened exact matching, the dotted line is the distribution for the linear propensity score, and the dotted-dashed line for the support vector machine examples.
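A plot like Figure 1 can be reproduced by kernel-smoothing the Monte Carlo draws of ∆ from each method. The draws below are hypothetical stand-ins whose means and spreads are loosely patterned on Table 1; the KDE itself is a plain Gaussian kernel estimate written out with NumPy rather than any particular plotting library.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kde(samples, grid, bandwidth):
    """Plain Gaussian kernel density estimate evaluated on a grid."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

# Hypothetical Monte Carlo draws of the Delta statistic under four methods.
draws = {
    "unweighted": rng.normal(0.13, 0.05, 1000).clip(min=0),
    "CEM":        rng.normal(0.01, 0.01, 1000).clip(min=0),
    "LDA":        rng.normal(0.03, 0.02, 1000).clip(min=0),
    "SVM":        rng.normal(0.09, 0.04, 1000).clip(min=0),
}

grid = np.linspace(0.0, 0.5, 501)
density = {m: gaussian_kde(d, grid, bandwidth=0.01) for m, d in draws.items()}
for m, f in density.items():
    print(m, "mode near", round(float(grid[f.argmax()]), 2))
```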
n     σ2   Scenario   θ     A      B      C      D
1000  5    baseline   6.2   0.952  0.937  0.941  0.929
1000  5    sparse     6.2   0.944  0.955  0.934  0.917
1000  10   baseline   6.2   0.941  0.918  0.935  0.912
1000  10   sparse     6.2   0.955  0.950  0.951  0.931
2000  5    baseline   6.2   0.931  0.945  0.937  0.923
2000  5    sparse     6.2   0.956  0.945  0.939  0.918
2000  10   baseline   6.2   0.959  0.936  0.926  0.928
2000  10   sparse     6.2   0.953  0.946  0.948  0.935

Table 3: Summary of coverage probabilities from the simulation experiment. The simulation scenarios corresponding to "baseline", "interaction", "positivity", and "sparse" are described in further detail in Section 6. Here, θ refers to the population average treatment effect among the treated. In this table, Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C to linear discriminant analysis, and Method D to support vector machines.
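Coverage probabilities like those in Table 3 are computed by checking, replicate by replicate, whether a nominal confidence interval contains the true effect. A minimal sketch with hypothetical Gaussian estimates (the values 6.2 and 0.33 mirror the θ and a standard error from Table 2, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical simulation output: per-replicate estimates and standard errors.
theta, reps = 6.2, 5000
est = rng.normal(theta, 0.33, reps)  # unbiased estimator with SD 0.33
se = np.full(reps, 0.33)             # correctly estimated standard errors

# Coverage: fraction of nominal 95% Wald intervals containing the true theta.
lo, hi = est - 1.96 * se, est + 1.96 * se
coverage = np.mean((lo <= theta) & (theta <= hi))
print(round(float(coverage), 3))
```

With unbiased estimates and correct standard errors the empirical coverage sits near the nominal 0.95; bias or understated standard errors push it down, which is the pattern the table is probing.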
Acknowledgments

The authors would like to acknowledge funding support from the following sources: the National Institutes of Health, the National Science Foundation, the Veterans Administration and the Grohne-Stepp Endowment from the University of Colorado Cancer Center.
Appendix

Proof of Theorem 3.3

We will use P and Q instead of Q0 and Q1 to ease the symbolic burden on the reader.

Proof. By definition of γ:
\[
\begin{aligned}
\gamma(P_{n_0}, Q_{n_1}) &= \sup_{f \in \mathcal{F}} \left| \int f\,dP_{n_0} - \int f\,dQ_{n_1} \right| \\
&= \sup_{f \in \mathcal{F}} \left| \int f\,dP_{n_0} \pm \int f\,dP \pm \int f\,dQ - \int f\,dQ_{n_1} \right| \\
&\le \sup_{f \in \mathcal{F}} \left| \int f\,dP_{n_0} - \int f\,dP - \int f\,dQ_{n_1} + \int f\,dQ \right| + \sup_{f \in \mathcal{F}} \left| \int f\,dP - \int f\,dQ \right| \\
&= \sup_{f \in \mathcal{F}} \left| \int f\,dP_{n_0} - \int f\,dP - \int f\,dQ_{n_1} + \int f\,dQ \right|,
\end{aligned}
\]
since γ(P, Q) = 0. Using elementary probability arguments, we have
\[
\begin{aligned}
\Pr\{\gamma(P_{n_0}, Q_{n_1}) > \delta\}
&= \Pr\left\{ \sup_{f \in \mathcal{F}} \left| \int f\,dP_{n_0} - \int f\,dP - \int f\,dQ_{n_1} + \int f\,dQ \right| > \delta \right\} \\
&= \Pr\left\{ \sup_{f \in \mathcal{F}} \left| \frac{1}{\sqrt{n_0}} \mathbb{G}^P_{n_0}(f) - \frac{1}{\sqrt{n_1}} \mathbb{G}^Q_{n_1}(f) \right| > \delta \right\} \\
&\le \Pr\left\{ \sup_{f \in \mathcal{F}} |\mathbb{G}^P_{n_0}(f)| > \sqrt{n_0}\,\delta/2 \right\} + \Pr\left\{ \sup_{f \in \mathcal{F}} |\mathbb{G}^Q_{n_1}(f)| > \sqrt{n_1}\,\delta/2 \right\},
\end{aligned}
\]
where \(\mathbb{G}^P_{n_0}(f)\) and \(\mathbb{G}^Q_{n_1}(f)\) represent the \(\mathcal{F}\)-indexed empirical processes of P and Q, respectively. Applying Theorem 2.14.9 in Van der Vaart and Wellner (1996), we can bound each of these terms as follows:
\[
\Pr\left\{ \sup_{f \in \mathcal{F}} |\mathbb{G}^P_{n_0}(f)| > \sqrt{n_0}\,\delta/2 \right\} < \left( \frac{D\sqrt{n_0}\,\delta}{2\sqrt{C}} \right)^C \exp(-n_0\delta^2/2),
\]
\[
\Pr\left\{ \sup_{f \in \mathcal{F}} |\mathbb{G}^Q_{n_1}(f)| > \sqrt{n_1}\,\delta/2 \right\} < \left( \frac{D\sqrt{n_1}\,\delta}{2\sqrt{C}} \right)^C \exp(-n_1\delta^2/2),
\]
where D is a constant depending only on K. Plugging these two bounds into (6.2) concludes the proof.
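The displayed tail bound can be evaluated numerically. The constants D and C below are placeholders (their actual values depend on the function class and the envelope constant K in the theorem), but the qualitative behavior, a polynomial prefactor overwhelmed by the exponential term, is visible for any fixed choice:

```python
import math

def tail_bound(n, delta, D=1.0, C=2.0):
    """One term of the bound: (D*sqrt(n)*delta / (2*sqrt(C)))^C * exp(-n*delta^2/2).
    D and C are hypothetical placeholder constants."""
    prefactor = (D * math.sqrt(n) * delta / (2 * math.sqrt(C))) ** C
    return prefactor * math.exp(-n * delta**2 / 2)

# The prefactor grows polynomially in n, but the exponential term dominates,
# so the bound vanishes as either sample size increases.
for n in (100, 1000, 10000):
    print(n, tail_bound(n, delta=0.1))
```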
Proof of Lemma 3.6

Proof. Define \(\gamma_i = \gamma_{\mathcal{F}_i}(P_i, Q_i)\). Then:
\[
\begin{aligned}
\Pr\left( \sum_i \gamma_i > \delta \right)
&= 1 - \Pr\left( \sum_i \gamma_i < \delta \right) \\
&\le 1 - \Pr(\gamma_i < \delta/d \ \forall\, i) \\
&= \Pr(\exists\, i \text{ such that } \gamma_i > \delta/d) \\
&\le \sum_i \Pr(\gamma_i > \delta/d) \\
&\le \sum_i B(\delta/d, D_i, C_i),
\end{aligned}
\]
where we have used the union bound in the second inequality.
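The union-bound step above can be sanity-checked by simulation. The exponential draws below are hypothetical stand-ins for the per-class discrepancies γ_i; the inequality in fact holds sample by sample, because Σ_i γ_i > δ forces at least one γ_i above δ/d:

```python
import numpy as np

rng = np.random.default_rng(2)

# d independent nonnegative "discrepancy" draws per replication
# (hypothetical stand-ins for the statistics gamma_i).
d, reps, delta = 3, 100_000, 0.6
gammas = rng.exponential(scale=0.1, size=(reps, d))

# Left side: Pr(sum_i gamma_i > delta); right side: the union bound.
lhs = np.mean(gammas.sum(axis=1) > delta)
rhs = sum(np.mean(gammas[:, i] > delta / d) for i in range(d))
print(lhs, rhs)
```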
Proof of Lemma 3.7

Proof. Assume \(\gamma_{\mathcal{F}_i}(\mu, \nu) = 0\) for all i. Then
\[
\begin{aligned}
\gamma_{\mathcal{F}^\pi}(\mu, \nu)
&= \sup_{f^\pi \in \mathcal{F}^\pi} \left| \int f^\pi\,d\mu - \int f^\pi\,d\nu \right|
= \max_\ell \sup_{f \in \mathcal{F}} \left| \int \pi_\ell \circ f\,d\mu - \int \pi_\ell \circ f\,d\nu \right| \\
&= \max_\ell \sup_{f \in \mathcal{F}} \left| \int f_\ell\,d\mu - \int f_\ell\,d\nu \right|
= \max_\ell \sup_{f_\ell \in \mathcal{F}_\ell} \left| \int f_\ell\,d\mu - \int f_\ell\,d\nu \right| \\
&= \max_\ell \gamma_{\mathcal{F}_\ell}(\mu, \nu) = 0.
\end{aligned}
\]
Conversely, assuming \(\gamma_{\mathcal{F}^\pi}(\mu, \nu) = 0\) yields
\[
\begin{aligned}
\gamma_{\mathcal{F}_i}(\mu, \nu)
&= \sup_{f_\ell \in \mathcal{F}_\ell} \left| \int f_\ell\,d\mu - \int f_\ell\,d\nu \right|
= \sup_{f \in \mathcal{F}} \left| \int \pi_\ell \circ f\,d\mu - \int \pi_\ell \circ f\,d\nu \right| \\
&\le \max_\ell \sup_{f \in \mathcal{F}} \left| \int \pi_\ell \circ f\,d\mu - \int \pi_\ell \circ f\,d\nu \right|
= \gamma_{\mathcal{F}^\pi}(\mu, \nu) = 0.
\end{aligned}
\]
This proves the first two equivalences. The third one is a byproduct of the proof.
Proof of Corollary 3.8

Proof. To avoid cumbersome notation, let $v = \frac{1}{n_0}\sum_{j=1}^{n_0} f^*(X^0_j) - \frac{1}{n_1}\sum_{j=1}^{n_1} f^*(X^1_j)$ and note that $v_\ell = \frac{1}{n_0}\sum_{j=1}^{n_0} f^*_\ell(X^0_j) - \frac{1}{n_1}\sum_{j=1}^{n_1} f^*_\ell(X^1_j)$. Then:
\[
\Pr\big(\|v\|_{\ell^p} > \delta\big) = \Pr\big(\|v\|^p_{\ell^p} > \delta^p\big) = \Pr\Big(\sum_\ell |v_\ell|^p > \delta^p\Big) \le \Pr\Big(\sum_\ell \gamma_{\mathcal F_\ell}(Q^0_{n_0}, Q^1_{n_1})^p > \delta^p\Big) \le \sum_\ell \Pr\big(\gamma_{\mathcal F_\ell}(Q^0_{n_0}, Q^1_{n_1})^p > \delta^p/d\big) = \sum_\ell \Pr\big(\gamma_{\mathcal F_\ell}(Q^0_{n_0}, Q^1_{n_1}) > \delta/d^{1/p}\big) \le \sum_\ell B(\delta/d^{1/p}, D^*, C^*) = d\,B(\delta/d^{1/p}, D^*, C^*),
\]
where the second and third inequalities follow from a slight variation of Lemma 3.6 and an application of Lemma 3.7. For the $\ell^\infty$ case we have:
\[
\Pr\big(\|v\|_{\ell^\infty} > \delta\big) \le \Pr\Big(\max_\ell |\gamma_\ell| > \delta\Big) \le \sum_\ell B(\delta, D^*, C^*),
\]
concluding the proof.
Balance for coarsening functions

We will show that the coarsened exact matching procedure belongs to a class of functions with tractable Vapnik–Chervonenkis dimension. Consider the set $\mathcal S$ of partitions with a fixed number of elements $R$. For a given partition $S \in \mathcal S$, with $S = \{s_1, \dots, s_R\}$, define $f^{k\alpha}_S$ to be:
\[
f^{k\alpha}_S(x) = \sum_{i=1}^{R} k_i \alpha_i \chi_{s_i}(x),
\]
where $k_i \le k$ for $k$ a constant, $\chi_{s_i}$ is the indicator function of $s_i$, and $\alpha := (\alpha_1, \dots, \alpha_R)$ is a binary vector, that is, $\alpha_i \in \{0, 1\}$ for each $i$. In words, if $x$ is found in $s_i$, $f$ returns a scaled version of $x$ if $\alpha_i$ is $1$ and zero otherwise. Now let $\mathcal F := \{f^{k\alpha}_S\}_{S \in \mathcal S,\, \alpha \in A,\, k \le \kappa}$, where $A$ is the set of all binary vectors of size $R$ and $\kappa \in \mathbb R$. The coarsened exact matching procedure belongs to this class of functions, since in that case $\alpha_i$ indicates whether there are at least two members of different groups in stratum $s_i$. For any sample point $x$, the weights are usually chosen in the following manner: if $x$ is a treated unit, $w^1_i = 1$; otherwise, $w^0_i = (m^s_1/m_1)/(m^s_0/m_0)$, where $s$ is the stratum $x$ belongs to. Letting $k_i = w^\ell_i n_\ell/m_\ell$ appropriately weighs the matched samples. We only need to add the mild assumption that the ratio of sample size to matched size per stratum $s$ does not grow faster than $\sqrt{\kappa}$, that is, $n_\ell/m^s_\ell \le \sqrt{\kappa}$ for all $s \in S$, because in that case $w^0_i \le m_0/m^s_0 \le n_0/m^s_0 \le \sqrt{\kappa}$ and $n_\ell/m_\ell \le \sqrt{\kappa}\, m^s_\ell/m_\ell \le \sqrt{\kappa}$, so $k_i \le \kappa$. Finally, notice that any similar function with a smaller partition size can be expressed by a function in $\mathcal F$, so we can allow variable partition size as long as it does not exceed a reasonable bound $R$.
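The stratum weights described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: `cem_weights` is a hypothetical helper that marks a stratum as matched when it contains members of both groups, then assigns $w^1 = 1$ to treated units and $w^0 = (m^s_1/m_1)/(m^s_0/m_0)$ to control units in matched strata.

```python
# A minimal sketch (assumed helper, not from the paper) of the coarsened exact
# matching weights: m_ell is the matched count in group ell, m_ell^s the
# matched count of group ell inside stratum s.
from collections import Counter

def cem_weights(strata, treated):
    """strata: stratum label per unit; treated: 0/1 per unit."""
    per_stratum = {}
    for s, t in zip(strata, treated):
        per_stratum.setdefault(s, Counter())[t] += 1
    # a stratum is matched if it contains members of both groups
    matched = {s for s, c in per_stratum.items() if c[0] > 0 and c[1] > 0}
    m1 = sum(per_stratum[s][1] for s in matched)
    m0 = sum(per_stratum[s][0] for s in matched)
    weights = []
    for s, t in zip(strata, treated):
        if s not in matched:
            weights.append(0.0)          # unmatched units are dropped
        elif t == 1:
            weights.append(1.0)          # treated units: w1 = 1
        else:
            m1_s, m0_s = per_stratum[s][1], per_stratum[s][0]
            weights.append((m1_s / m1) / (m0_s / m0))
    return weights

w = cem_weights(strata=["a", "a", "a", "b", "b", "c"],
                treated=[1, 0, 0, 1, 0, 0])
```

With these weights the total weighted control mass equals $m_0$, and each matched stratum carries control mass proportional to its treated share, which is the balancing property the bound above exploits.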
For any set of points of size $R$ there is a partition $S$ placing each point in a different element, and therefore an $\alpha$ that can assign each point arbitrarily to either $0$ or $1$; so $\mathcal F$ shatters such a set. However, if we add an extra point, then since the number of partition elements is constrained, it would have to share a partition element with a previous point, and hence its assignment under $f^{k\alpha}_S$. So the Vapnik–Chervonenkis dimension of $\mathcal F$ is $R$.
Finally, let $g(Z^\ell) = Q^\ell_{n_\ell}$, where $Q^\ell_{n_\ell}$ is the empirical distribution of the sample $Z^\ell$ for group $\ell$. Let $k^*$ be chosen as above and let $(S^*, \alpha^*)$ be the particular partition and binary vector used for coarsened exact matching. Then, for the $\ell$th component we get:
\[
\Big| \frac{1}{m_0}\sum_{i \in M_0} w^0_i Z^0_{i,\ell} - \frac{1}{m_1}\sum_{j \in M_1} w^1_j Z^1_{j,\ell} \Big| = \Big| \frac{1}{n_0}\sum_{i=1}^{n_0} f^{k^*\alpha^*}_{S^*,\ell}(Z^0_i) - \frac{1}{n_1}\sum_{j=1}^{n_1} f^{k^*\alpha^*}_{S^*,\ell}(Z^1_j) \Big| \le \sup_{f_\ell \in \mathcal F^*} \Big| \frac{1}{n_0}\sum_{i=1}^{n_0} f_\ell(Z^0_i) - \frac{1}{n_1}\sum_{j=1}^{n_1} f_\ell(Z^1_j) \Big| = \gamma_{\mathcal F^*}(Q^0_{n_0}, Q^1_{n_1}) = \gamma_{\mathcal F^*}\big(g(Z^0), g(Z^1)\big).
\]
Thus, the discrepancy among the matched samples per dimension is bounded by the $\gamma_{\mathcal F^*}$ distance of the unmatched samples. Finally, the function $h(x) := \kappa x$ is an envelope function of $\mathcal F$ and has norm $\|h\|_{L^2(\mu)} < \infty$ as long as we assume a compact domain, which is reasonable for most coarsened exact matching applications. Then, by Theorem 2.6.7 of Van Der Vaart and Wellner (1996):
\[
\sup_\mu N\big(\epsilon, \mathcal F, L^2(\mu)\big) \le \Big(\frac{K}{\epsilon}\Big)^{C^*},
\]
for some constant $K$ and where $C^* = 2(R - 1)$. This leads us to our final result. Assume ideal balance on the population probabilities holds for $\gamma_{\mathcal F_\pi}$; then, for the $\ell$th component we have:
\[
\Pr\Big( \Big| \frac{1}{m_0}\sum_{i \in M_0} w^0_i Z^0_{i,\ell} - \frac{1}{m_1}\sum_{j \in M_1} w^1_j Z^1_{j,\ell} \Big| > \delta \Big) \le B(\delta, D, C^*).
\]
If we are interested in the $\ell^p$ norm of the full vector instead, then, by Corollary 3.8:
\[
\Pr\Big( \Big\| \frac{1}{m_0}\sum_{i \in M_0} w^0_i Z^0_i - \frac{1}{m_1}\sum_{j \in M_1} w^1_j Z^1_j \Big\|_{\ell^p} > \delta \Big) \le d\,B(\delta/d^{1/p}, D, C^*),
\]
for finite $p \ge 1$, while
\[
\Pr\Big( \Big\| \frac{1}{m_0}\sum_{i \in M_0} w^0_i Z^0_i - \frac{1}{m_1}\sum_{j \in M_1} w^1_j Z^1_j \Big\|_{\ell^\infty} > \delta \Big) \le d\,B(\delta, D, C^*).
\]
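The way a per-coordinate tail bound is aggregated into vector bounds can be sketched as follows. This is an illustration only: the concrete form of $B(\delta, D, C)$ comes from earlier in the paper, so here it is treated as an abstract callable, and `toy_B` is a hypothetical stand-in.

```python
# Sketch of the aggregation in Corollary 3.8: a per-coordinate tail bound
# B(delta, D, C) yields bounds for the full d-dimensional imbalance vector.
def lp_bound(B, delta, d, p, D, C):
    """Bound on Pr(||v||_{l^p} > delta): d * B(delta / d^(1/p), D, C)."""
    return d * B(delta / d ** (1.0 / p), D, C)

def linf_bound(B, delta, d, D, C):
    """Bound on Pr(||v||_{l^inf} > delta): d * B(delta, D, C)."""
    return d * B(delta, D, C)

# Hypothetical stand-in for B, decreasing in delta as any real tail bound is.
toy_B = lambda delta, D, C: min(1.0, (D / delta) ** C)

b_p = lp_bound(toy_B, delta=1.0, d=4, p=2, D=0.1, C=2)
b_inf = linf_bound(toy_B, delta=1.0, d=4, D=0.1, C=2)
```

Since $\delta/d^{1/p} \le \delta$ and $B$ is decreasing in its first argument, the $\ell^\infty$ bound is never larger than the $\ell^p$ bound at the same $\delta$.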
Balance using propensity scores

Recall that $e(Z) = P(T = 1 \mid Z)$, and that we are assuming $Z \mid T = \ell \sim N(\mu_\ell, \Sigma)$. Let $p_\ell$ be the probability density function of $N(\mu_\ell, \Sigma)$, that is, the Gaussian density; then by the density version of Bayes' theorem we have
\[
p(T = 1 \mid Z = z) = \frac{p_1 P(T = 1)}{p_1 P(T = 1) + p_0 P(T = 0)}.
\]
Therefore, we can express the logit of $e(Z)$ as
\[
\operatorname{logit}(e(Z)) = \log\Big( \frac{e(Z)}{1 - e(Z)} \Big) = \log\Big( \frac{p_1 P(T = 1)}{p_0 P(T = 0)} \Big).
\]
Now define $L_k := \operatorname{logit}(e(Z_k))$; the matching procedure is then based on the difference $|L_i - L_j|$. Given the above computation, and after a few straightforward steps, we get
\[
|L_i - L_j| = \big| (\mu_1 - \mu_0)^T \Sigma^{-1} (Z_i - Z_j) \big| = |f^*(Z_i) - f^*(Z_j)|,
\]
where $f^*(x) = w^T x$ for $w \in \mathbb R^p$. Notice that the vector $w$ is the same as the one used for linear discriminant analysis, so, adding an offset parameter, it will be useful to think of $f^*$ as a hyperplane.
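The reduction of the logit difference to the linear discriminant form can be verified numerically — a minimal sketch under the stated Gaussian assumption, with arbitrary illustrative parameter values:

```python
# Numerical check (illustration only): under Z | T=l ~ N(mu_l, Sigma), the
# logit difference reduces to |(mu1 - mu0)^T Sigma^{-1} (Z_i - Z_j)|.
import numpy as np

rng = np.random.default_rng(0)
p = 3
mu0, mu1 = rng.normal(size=p), rng.normal(size=p)
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)      # a generic SPD covariance
Sigma_inv = np.linalg.inv(Sigma)
pi1 = 0.4                            # P(T = 1), an arbitrary prior

def logit_e(z):
    """logit of e(z) = P(T=1 | Z=z) via the density version of Bayes' theorem."""
    def log_gauss(z, mu):            # log N(z; mu, Sigma) up to a shared constant
        d = z - mu
        return -0.5 * d @ Sigma_inv @ d
    return (log_gauss(z, mu1) + np.log(pi1)) - (log_gauss(z, mu0) + np.log(1 - pi1))

zi, zj = rng.normal(size=p), rng.normal(size=p)
lhs = abs(logit_e(zi) - logit_e(zj))
rhs = abs((mu1 - mu0) @ Sigma_inv @ (zi - zj))
assert np.isclose(lhs, rhs)
```

The shared normalizing constant and the prior offset cancel in the difference, which is why only the linear term $w^T z$ with $w = \Sigma^{-1}(\mu_1 - \mu_0)$ survives.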
Let $M^j_0$ be the set of control units assigned to treatment unit $j$. We make the assumption that a fixed number of controls is assigned to each treatment unit, so that $m_0 = |M^j_0|\, m_1$. Then
\begin{align*}
\Delta &:= \Big| \frac{1}{m_1} \sum_{j \in M_1} \operatorname{logit}(e_j) - \frac{1}{m_0} \sum_{i \in M_0} \operatorname{logit}(e_i) \Big| \\
&= \Big| \frac{1}{m_1} \sum_{j \in M_1} L_j - \sum_{j \in M_1} \frac{1}{m_0} \sum_{i \in M^j_0} L_i \Big| \\
&= \Big| \sum_{j \in M_1} \Big( \frac{1}{m_1} L_j - \frac{1}{m_0} \sum_{i \in M^j_0} L_i \Big) \Big| \\
&= \Big| \sum_{j \in M_1} \Big( \frac{1}{m_1} \sum_{i \in M^j_0} \frac{L_j}{|M^j_0|} - \frac{1}{m_0} \sum_{i \in M^j_0} L_i \Big) \Big| \\
&= \Big| \sum_{j \in M_1} \sum_{i \in M^j_0} \Big( \frac{L_j}{m_1 |M^j_0|} - \frac{L_i}{m_0} \Big) \Big| \\
&= \Big| \sum_{j \in M_1} \sum_{i \in M^j_0} \frac{1}{m_0} (L_j - L_i) \Big| \\
&= \Big| \sum_{j \in M_1} \sum_{i \in M^j_0} \frac{1}{m_0} \big( f^*(Z_j) - f^*(Z_i) \big) \Big| \\
&= \Big| \frac{1}{m_1} \sum_{j \in M_1} f^*(Z_j) - \frac{1}{m_0} \sum_{i \in M_0} f^*(Z_i) \Big|.
\end{align*}
That is, we can express the difference of means of logits in terms of the difference of means of the discriminant functions.
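The identity above can be checked numerically. This is an illustration only, with arbitrary synthetic data: since $L_k = f^*(Z_k) + b$ for a common offset $b$ (the prior and normalizing terms), the offset cancels in the difference of means.

```python
# Numerical check (illustration only): with a fixed number r of controls per
# treated unit, the difference of mean logits equals the difference of mean
# discriminant values, since L_k = f*(Z_k) + b for a common offset b.
import numpy as np

rng = np.random.default_rng(1)
p, m1, r = 3, 5, 2                  # r controls per treated unit, m0 = r * m1
w, b = rng.normal(size=p), 0.7      # f*(x) = w @ x; b is the common logit offset
Z1 = rng.normal(size=(m1, p))       # matched treated units
Z0 = rng.normal(size=(m1 * r, p))   # matched control units

L1, L0 = Z1 @ w + b, Z0 @ w + b     # logits
delta_logit = abs(L1.mean() - L0.mean())
delta_f = abs((Z1 @ w).mean() - (Z0 @ w).mean())
assert np.isclose(delta_logit, delta_f)
```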
Let $p$ be the dimension of the covariates, and let $\mathcal F$ be the collection of $p$-dimensional hyperplanes; notice that $f^* \in \mathcal F$. The Vapnik–Chervonenkis dimension of $\mathcal F$ is known to be $p + 1$ (Mohri et al., 2018).
We would like to bound $\Delta$ in terms of $\gamma$, but we first need some adjustments to $f^*$. The matching procedure determines a set $Z_M = \{Z_k \mid k \in M\}$ of matched samples, where $M = M_0 \cup M_1$. By the Gaussian assumption the $Z$s are sampled from a Gaussian mixture, so the probability of two sample points being equal is zero. Hence there is an $\epsilon > 0$ such that for all $k \in M$, $Z \cap B_\epsilon(Z_k) = \{Z_k\}$; that is, each $\epsilon$-ball centered at a matched sample contains no other sample point (here $Z$ is the sample set). Let $S_\epsilon = \cup_k B_\epsilon(Z_k)$, and note that $S_\epsilon$ is a measurable set. Let $\beta_{S_\epsilon}(x) := x\, \chi_{S_\epsilon}(x)$; this function maps points to zero if unmatched and to themselves if matched. Furthermore, let $\beta_\ell(x) := \frac{n_\ell}{m_\ell} \chi_{M_\ell}(x) + \chi_{M^C_\ell}(x)$, for $\ell \in \{0, 1\}$. Each $\beta_\ell$ scales elements in $M_\ell$ by the factor $n_\ell/m_\ell$ and leaves the rest untouched. Notice that $f^*_M := f^* \circ \beta_1 \circ \beta_0 \circ \beta_{S_\epsilon}$ sends $Z_k$ to $\frac{n_\ell}{m_\ell} w^T Z_k$ if $k \in M_\ell$ and to $0$ otherwise.
Then we can express ∆ as
\[
\Delta = \left| \frac{1}{m_1} \sum_{j \in M_1} f^*(Z_j) - \frac{1}{m_0} \sum_{i \in M_0} f^*(Z_i) \right|
       = \left| \frac{1}{n_1} \sum_{j=1}^{n_1} f^*_M(Z_j) - \frac{1}{n_0} \sum_{i=1}^{n_0} f^*_M(Z_i) \right|.
\]
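A minimal numerical check of this rewriting (our illustration, with arbitrary data and matched index sets): zeroing out unmatched points and scaling each matched point in group ℓ by n_ℓ/m_ℓ turns the average over matched units into an average over all units.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n0, n1 = 3, 8, 6
w = rng.normal(size=p)

def f_star(z):
    # The linear f* of the text: f*(z) = w^T z.
    return w @ z

Z0 = rng.normal(size=(n0, p))        # control samples
Z1 = rng.normal(size=(n1, p))        # treated samples
M0 = [0, 2, 5]                       # matched indices (arbitrary here)
M1 = [1, 3, 4, 5]
m0, m1 = len(M0), len(M1)

# Left-hand side: means over matched units only.
lhs = abs(np.mean([f_star(Z1[j]) for j in M1])
          - np.mean([f_star(Z0[i]) for i in M0]))

def f_star_M(z, idx, matched, n, m):
    # Zero on unmatched points; scale matched points in group l by n_l/m_l.
    return (n / m) * f_star(z) if idx in matched else 0.0

# Right-hand side: means over all units of the masked, rescaled function.
rhs = abs(np.mean([f_star_M(Z1[j], j, M1, n1, m1) for j in range(n1)])
          - np.mean([f_star_M(Z0[i], i, M0, n0, m0) for i in range(n0)]))

assert np.isclose(lhs, rhs)
print("identity holds")
```

The factor n_ℓ/m_ℓ is exactly what converts the 1/n_ℓ normalization of the full-sample mean back into the 1/m_ℓ normalization of the matched-sample mean.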
Now consider the class F_M := {f ∘ β_1 ∘ β_0 ∘ β_S | f ∈ F, S ∈ Σ}, where Σ is the collection of sets that are measurable with respect to the distribution of the Zs. The Vapnik–Chervonenkis dimension of F_M is the same as that of F, namely p + 1. To see this, note that the standard derivation for the hyperplane case shatters the standard basis B of R^p. With probability one, no sample point equals a standard basis vector, so there is an ϵ′ > 0 for which the set s = ∪_{x∈B} B_{ϵ′}(x) belongs to Σ and contains no sample point. Taking the functions {f_ν} ⊂ F used to shatter B together with s, the functions {f_ν ∘ β_1 ∘ β_0 ∘ β_s} ⊂ F_M also shatter B, so the Vapnik–Chervonenkis dimension of F_M is at least p + 1. Conversely, since β_1, β_0, and β_S are each either zero or a scaled identity, composing with them adds no complexity, so the dimension is no larger than p + 1; hence it is exactly p + 1.
For the envelope function, we can choose h(x) = ⟨w_e, x⟩. The norm of w_e must be large enough to preserve the Vapnik–Chervonenkis dimension of p + 1; since the vectors used to ensure that dimension have norm p + 1, the norm of w_e must be at least p + 1, so we can choose any constant C > p + 1. Since we are interested in vectors of the form w = Σ^{-1}∆µ, we have ∥w∥ ≤ ∥Σ^{-1}∥_F ∥∆µ∥_2, so the user has to choose constants bounding each of these norms. We must also assume that the covariates themselves are bounded; this ensures a finite norm for h.
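The norm bound above is just submultiplicativity: ∥Σ^{-1}∆µ∥₂ ≤ ∥Σ^{-1}∥₂∥∆µ∥₂ ≤ ∥Σ^{-1}∥_F∥∆µ∥₂. A quick check (our sketch, with a randomly generated covariance):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 4
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)     # a well-conditioned covariance matrix
dmu = rng.normal(size=p)            # stand-in for the mean difference

w = np.linalg.solve(Sigma, dmu)     # w = Sigma^{-1} dmu
bound = np.linalg.norm(np.linalg.inv(Sigma), "fro") * np.linalg.norm(dmu)

assert np.linalg.norm(w) <= bound + 1e-12
print("norm(w) <= ||Sigma^{-1}||_F * ||dmu||_2 holds")
```

In practice the user supplies upper bounds on ∥Σ^{-1}∥_F and ∥∆µ∥₂ rather than computing them from unknown population quantities.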
Finally, we have
\[
\Delta = \left| \frac{1}{n_1} \sum_{j=1}^{n_1} f^*_M(Z_j) - \frac{1}{n_0} \sum_{i=1}^{n_0} f^*_M(Z_i) \right|
\le \sup_{f \in F_M} \left| \frac{1}{n_1} \sum_{j=1}^{n_1} f(Z_j) - \frac{1}{n_0} \sum_{i=1}^{n_0} f(Z_i) \right|
= \gamma_{F_M}(Q_0^{n_0}, Q_1^{n_1}).
\]
Assuming Ideal Balance on the population probabilities, and applying Theorem 2.6.7 of Van der Vaart and Wellner (1996) in conjunction with Theorem 3.3, yields Pr{∆ > δ} ≤ B(δ, D, 2p).
Covering number bound for Reproducing Kernel Hilbert Spaces

We refer the reader to Wahba (1990), Berlinet and Thomas-Agnan (2011), and Steinwart and Christmann (2008) for overviews of reproducing kernel Hilbert spaces. Roughly speaking, a mapping k : X × X → R is said to be the reproducing kernel associated with the reproducing kernel Hilbert space H if it satisfies the following properties: (a) k(·, x) ∈ H for any x ∈ X; (b) f(x) = ⟨f, k(·, x)⟩_H for all f ∈ H and x ∈ X. Property (b) is commonly referred to as the reproducing property.
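A necessary condition for k to be a reproducing kernel (by the Moore–Aronszajn characterization) is that every Gram matrix [k(x_i, x_j)] is symmetric positive semidefinite. A quick numerical check for the Gaussian kernel (our illustration, not part of the paper's argument):

```python
import numpy as np

def gaussian_kernel(x, y, gamma=0.5):
    # Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2).
    return np.exp(-gamma * np.sum((x - y) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                 # arbitrary points in R^3

# Gram matrix K_ij = k(x_i, x_j); symmetric by construction.
K = np.array([[gaussian_kernel(xi, xj) for xj in X] for xi in X])

eigvals = np.linalg.eigvalsh(K)
assert np.all(eigvals > -1e-10)              # positive semidefinite (up to fp error)
print("Gram matrix is positive semidefinite")
```

Positive semidefiniteness of all Gram matrices is in fact also sufficient: it guarantees the existence of a (unique) RKHS for which k satisfies properties (a) and (b).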
To apply Theorem 3.3 to the reproducing kernel case, we need to bound the covering number directly, using arguments different from those of Vapnik–Chervonenkis theory. Define the space
\[
H^m_q(\mathbb{R}^p) = \{ f \in L_q(\mathbb{R}^p) \mid D^j f \in L_q(\mathbb{R}^p)\ \forall j \in \{1, \ldots, m\};\ \|f\|_q < \infty \},
\]
where
\[
\|f\|_q = \sum_{0 \le |\alpha| \le m} \|D^\alpha f\|_{L_q}
\]
and D^α denotes partial derivatives in the sense of distributions. Then, as a consequence of Theorem 1 of Nickl and Pötscher (2007), if m − q/p > 0, then
\[
N(\epsilon, H, \|\cdot\|_q) \le b_1 \epsilon^{-q},
\]
while if m − q/p < 0,
\[
N(\epsilon, H, \|\cdot\|_q) \le b_2 \epsilon^{-p/m}.
\]
Based on this result, Theorem 3.3 can then be applied to prove a convergence rate under ideal balance. Note that this does not cover the Gaussian kernel case: the Gaussian kernel is infinitely differentiable, so the space H^m_q(R^p) does not apply. For the reader interested in the Gaussian case, we refer to the recent paper by Steinwart and Fischer (2020).
References

Abadie, A. and G. W. Imbens (2006). Large sample properties of matching estimators for average treatment effects. Econometrica 74(1), 235–267.
Abadie, A. and G. W. Imbens (2011). Bias-corrected matching estimators for average treatment effects. Journal of Business & Economic Statistics 29(1), 1–11.
Abadie, A. and G. W. Imbens (2016). Matching on the estimated propensity score. Econometrica 84(2), 781–807.
Baudat, G. and F. Anouar (2000). Generalized discriminant analysis using a kernel approach. Neural Computation 12(10), 2385–2404.
Berlinet, A. and C. Thomas-Agnan (2011). Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer Science & Business Media.
Chan, K. C. G., S. C. P. Yam, and Z. Zhang (2016). Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 78(3), 673–700.
Chervonenkis, A. and V. Vapnik (1971). Uniform convergence of the frequencies of occurrence of events to their probabilities. Teoriia Veroiatnostei i Ee Primeneniia 16, 264–279.
Hainmueller, J. (2012). Entropy balancing for causal effects: A multivariate reweighting method to produce balanced samples in observational studies. Political Analysis 20(1), 25–46.
Hansen, B. B. (2008). The prognostic analogue of the propensity score. Biometrika 95(2), 481–488.
Hazlett, C. (2016). Kernel balancing: A flexible non-parametric weighting procedure for estimating causal effects.
Ho, D. E., K. Imai, G. King, and E. A. Stuart (2007). Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis 15(3), 199–236.
Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association 81(396), 945–960.
Iacus, S. M., G. King, and G. Porro (2011). Multivariate matching methods that are monotonic imbalance bounding. Journal of the American Statistical Association 106(493), 345–361.
Imai, K. and M. Ratkovic (2014). Covariate balancing propensity score. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76(1), 243–263.
Imbens, G. W. and D. B. Rubin (2015). Causal Inference in Statistics, Social, and Biomedical Sciences. Cambridge University Press.
Kallus, N. (2020). Generalized optimal matching methods for causal inference. Journal of Machine Learning Research 21(62), 1–54.
Kosorok, M. R. (2007). Introduction to Empirical Processes and Semiparametric Inference. Springer Science & Business Media.
Mohri, M., A. Rostamizadeh, and A. Talwalkar (2018). Foundations of Machine Learning. MIT Press.
Neyman, J. (1923). Sur les applications de la théorie des probabilités aux expériences agricoles: Essai des principes. Roczniki Nauk Rolniczych 10, 1–51.
Nickl, R. and B. M. Pötscher (2007). Bracketing metric entropy rates and empirical central limit theorems for function classes of Besov- and Sobolev-type. Journal of Theoretical Probability 20(2), 177–199.
Rosenbaum, P. R. and D. B. Rubin (1983). The central role of the propensity score in observational studies for causal effects. Biometrika 70(1), 41–55.
Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66(5), 688.
Rubin, D. B. (1976). Multivariate matching methods that are equal percent bias reducing, I: Some examples. Biometrics, 109–120.
Rubin, D. B., E. A. Stuart, et al. (2006). Affinely invariant matching methods with discriminant mixtures of proportional ellipsoidally symmetric distributions. The Annals of Statistics 34(4), 1814–1826.
Rubin, D. B. and N. Thomas (1992). Affinely invariant matching methods with ellipsoidal distributions. The Annals of Statistics, 1079–1093.
Salimi, B. and D. Suciu (2016). ZaliQL: A SQL-based framework for drawing causal inference from big data. arXiv preprint arXiv:1609.03540.
Sekhon, J. S. (2008). Multivariate and propensity score matching software with automated balance optimization: The Matching package for R. Journal of Statistical Software, forthcoming.
Steinwart, I. and A.
+page_content=' Christmann (2008).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Support vector machines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Springer Science & Business Media.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Steinwart, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Fischer (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' A closer look at covering number bounds for gaussian kernels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Journal of Complexity, 101513.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Stuart, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' (2010).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Matching methods for causal inference: A review and a look forward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Statistical science: a review journal of the Institute of Mathematical Statistics 25(1), 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' 21 Van Der Vaart, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Wellner (1996).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Weak convergence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' In Weak convergence and empirical processes, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' 16–28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Springer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Wahba, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' (1990).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Spline Models for Observational Data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Society for Industrial and Applied Math- ematics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Wang, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=', M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Morucci, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Awan, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Roy, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Rudin, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Volfovsky (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Flame: A fast large-scale almost matching exactly approach to causal inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Wang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Zubizarreta (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Minimal dispersion approximately balancing weights: asymp- totic properties and practical considerations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Biometrika.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Wong, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Chan (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Kernel-based covariate functional balancing for observational studies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Biometrika 105(1), 199–213.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Zhu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=', J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Savage, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Ghosh (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' A kernel-based metric for balance assessment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Journal of causal inference 6(2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Zolotarev, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' (1984).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Probability metrics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Theory of Probability & Its Applications 28(2), 278–302.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Zubizarreta, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Stable weights that balance covariates for estimation with incomplete outcome data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' Journal of the American Statistical Association 110(511), 910–922.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
+page_content=' 22' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'}
diff --git a/bdE4T4oBgHgl3EQfPQyn/content/tmp_files/2301.04972v1.pdf.txt b/bdE4T4oBgHgl3EQfPQyn/content/tmp_files/2301.04972v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..fbe496b9039f101ae25d88840a350d73c9fef218
--- /dev/null
+++ b/bdE4T4oBgHgl3EQfPQyn/content/tmp_files/2301.04972v1.pdf.txt
@@ -0,0 +1,1474 @@
+Prepared for submission to JHEP
+CERN-TH-2023-005
+Isospin Mass Differences of the B, D and K
+Matthew Rowe,1 Roman Zwicky1,2
+1Higgs Centre for Theoretical Physics, School of Physics and Astronomy, University of Edinburgh,
+Edinburgh EH9 3JZ, Scotland
+2Theoretical Physics Department, CERN, Esplanade des Particules 1,
+Geneva CH-1211, Switzerland
+E-mail: m.j.rowe@sms.ed.ac.uk, roman.zwicky@ed.ac.uk
+Abstract: We compute the electromagnetic mass difference for the B-, D- and K-mesons
+using QCD sum rules with double dispersion relations. For the B- and D-mesons we also
+compute the linear quark mass correction, whereas for the K the standard soft theorems
+prove more powerful. The mass differences, which have not previously been computed via
+a double dispersion, are fully consistent with experiment, albeit with large uncertainties.
+Contents
+1 Introduction 1
+2 Electromagnetic Mass Difference ∆mH|QED from QCD Sum Rules 3
+2.1 B- and D-meson with Pseudoscalar Operators 3
+2.1.1 Numerics 5
+2.2 K-meson with Axial Operators 6
+3 Linear Quark Mass Correction ∆mH|mq 8
+3.1 QCD Sum Rule Computation of ⟨ ¯H|¯qq| ¯H⟩ for H = B, D 8
+3.1.1 Numerics 9
+3.2 SU(3)F estimates of ⟨ ¯H|¯qq| ¯H⟩ for H = B, D 10
+3.3 Soft Goldstone estimate of ⟨L|¯qq|L⟩ for L = π, K 10
+4 Final Overview and Conclusions 11
+A Variants of Quark-Hadron Duality 12
+A.1 Weight function ω(s) = s 13
+A.2 Weight function ω(s) = 1/(s − η) 14
+B Numerical Input 14
+B.1 Decay constants fB, fD and fK 14
+C Self Energies and Condensates for ∆mH|QED 14
+C.1 Perturbation theory 15
+C.2 Condensates 16
+D Some Classic Results 16
+D.1 Linear quark mass dependence from Feynman-Hellmann theorem 16
+D.2 ∆mπ|QED from soft theorem and Weinberg sum rules 16
+
+arXiv:2301.04972v1 [hep-ph] 12 Jan 2023
+1 Introduction
+The mass difference of charged and neutral hadrons,
+∆mH = mH+ − mH0 ,   H = B, D, K, π, p ,   (1.1)
+is an isospin breaking effect and has intrigued particle physicists from the very beginning.
+In particular the proton-neutron [1] and the π+-π0 [2] mass difference have been discussed
+extensively. At the microscopic level ∆mH is driven by differences in the electric charge
+and the mass mq of the hadron’s light valence quark q = u, d,
+
+∆mB = ∆mB|QED + ∆mB|mq .   (1.2)
+The sign and the size depend on the hadron in question and QED stands for quantum
+electrodynamics.1,2 Recent lattice Monte Carlo simulations [3, 4] have verified this to a
+high accuracy, for light and charm mesons, by computing both the charged and the neutral
+mass and effectively using (1.1).
+One may take a different approach and compute the two differences in (1.2) separately
+by using the second order perturbation theory formula (with H = B for definiteness)3
+
+δmB|QED = (−iα/(2mB(2π)³)) ∫ d⁴q T(B)µν(q) ∆µν(q) + O(α²) ,   (1.3)
+
+with
+
+∆mB|QED ≡ δmB+|QED − δmB0|QED ,   (1.4)
+
+known in the current algebra era [7, 8]. Above ∆µν(q) = (1/q²)(−gµν + (1 − ξ) qµqν/q²) is the photon
+propagator, α = e²/(4π) the fine structure constant and T(B)µν(q) is the (uncontracted)
+forward Compton scattering tensor,
+
+T(B)µν(q) = i ∫ d⁴x e^(−iq·x) ⟨B|Tjµ(x)jν(0)|B⟩ ,   (1.5)
+1 Strictly speaking the separation (1.2) is not well-defined as it requires fixing a (quark mass) renormalisation
+scheme, e.g. [3]. In turn this is a reason for being interested in the problem as, especially light,
+quark masses cannot be determined to high precision without folding in QED. This shows for example in
+the D-meson results in comparison between [3] and [4]. For our purposes ∆mB|mq is as defined from (1.7).
+2 Effects due to the weak force are of O(Λ_QCD²/m_W²) with respect to QED and are thus negligible. Similar
+effects are relevant in the context of neutral meson mixing, e.g. [5, 6].
+3 Note that in the literature the notation ∆mB² ≡ 2mB ∆mB is also frequently used.
+
+with jα = Σq Qq ¯qγαq, the electromagnetic current.
+In 1963, Cottingham [9] improved this formula by parameterising it in terms of form
+factors and relating it to structure functions: deforming the contour q0 → iq0, writing a
+dispersion representation and assessing the number of subtraction terms of the form factors
+allowed him to write the contribution as an integral over Q² = −q² ≥ 0 and ν = p · q/mB in
+the physical region. This opened the gate for many phenomenological studies saturating the
+dispersion relation by a few terms beyond the elastic one and using high-energy constraints.
+This is a formidable task, as one requires the knowledge of a correlation function over the
+entire energy range, akin to the situation of the vacuum polarisation for the anomalous
+magnetic moment. Some examples are for K, π [10, 11] using chiral perturbation theory
+(and large Nc), for B and D [12, 13] using heavy quark theory (and large Nc), for the
+proton-neutron difference [14] with updated fits to the structure functions, and an approach
+to B, D, K and π using vector meson dominance [15]. Another interesting point, not
+unrelated, is that (1.3) requires renormalisation [16] and it was argued that it is justified
+to cut off the Q²-integral. Debates about subtraction terms are ongoing, cf. [14] and the
+response [17].
+Here we do not follow this phenomenological approach but evaluate (1.5) directly
+in Minkowski space using double dispersion relation sum rules, and thus determine the
+mass differences from a unified framework (i.e. the same hadronic input).4 To the best of
+our knowledge this has not been done previously with sum rules, presumably due to the
+subtleties of non-gauge-invariant interpolating currents [19, 20]. For example, in leptonic
+decays this requires the introduction of a non-local interpolating operator (or an auxiliary
+scalar field carrying the charge to infinity) for gauge invariance and reproduction of all
+infrared sensitive logs [20]. However, in the case at hand this is not necessary, as verified
+by explicit computation, since ∆mB is an infrared safe quantity.
+An efficient and transparent way to implement the first order quark mass corrections
+is to make use of the Feynman-Hellmann theorem, which gives
+
+mB²|mq = Σq mq ⟨B|¯qq|B⟩ ,   (1.6)
+
+as rederived in App. D.1. For the difference (1.1) this gives
+∆mB|mq = ((mu − md)/(2mB)) ⟨B|¯qq|B⟩ + O((mu − md)²) .   (1.7)
+The matrix element ⟨B|¯qq|B⟩ can be evaluated in the isospin degenerate limit q = u = d
+since we work to leading order (LO). For the B- and the D-meson we compute this matrix
+element, whereas for the Kaon and the pion a soft theorem, ⟨π|¯qq|π⟩ = −(2/fπ²) ⟨0|¯qq|0⟩ + O(mπ²/mρ²)
+with fπ ≈ 131 MeV, due to their pseudo-Goldstone nature, proves more effective.
+In principle one could compute all the ∆mB|mq-effects with the QCD analogue of
+(1.3) but this would be rather inefficient and we further comment in the relevant section.
+4This function has been evaluated for the pion on the lattice with good agreement with experiment only
+very recently using the infinite volume reconstruction method [18].
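To get a feel for the size of the soft-theorem matrix element, one can insert a ballpark value for the vacuum condensate. The sketch below is purely illustrative: the value ⟨0|¯qq|0⟩ ≈ −(240 MeV)³ is an assumed generic reference number, not an input of this paper.

```python
# Illustrative size estimate of the soft theorem <pi|qbar q|pi> = -2 <0|qbar q|0>/f_pi^2.
# The condensate value below is an assumed ballpark, NOT this paper's input.
f_pi = 0.131              # GeV, the normalisation quoted in the text (f_pi ~ 131 MeV)
qq_vac = -(0.24 ** 3)     # GeV^3, <0|qbar q|0> ~ -(240 MeV)^3 (generic reference value)

pi_qq_pi = -2.0 * qq_vac / f_pi ** 2   # GeV, relativistic normalisation of |pi>
assert 1.0 < pi_qq_pi < 2.5            # comes out O(1 GeV) and positive
```

The only point of the sketch is that the matrix element is positive and of order 1 GeV for any reasonable condensate value.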
+
+Another noteworthy aspect is that we were not able to obtain stable sum rules for the pion
+(cf. Sec. 2.2).
+The paper is organised as follows. In Sec. 2 the electromagnetic computation is presented,
+followed by the quark mass correction in Sec. 3. We give an overview of the results
+and the conclusions in Sec. 4. Comments on quark-hadron duality, some (extra) computations,
+the numerical input and useful classic results are collected in Apps. A, C, B and D
+respectively.
+2 Electromagnetic Mass Difference ∆mH|QED from QCD Sum Rules
+The electromagnetic mass difference follows from the formula quoted in (1.3) and it is our
+task to evaluate this. The main theoretical challenge is to incorporate the two hadrons for
+which a non-perturbative method is needed. We use QCD sum rules [21] with a double
+dispersion relation. The first step involves the adoption of an interpolating operator. For
+the heavy mesons a pseudoscalar current is suitable and has proven to give good results
+in many other contexts. For particles of light quark masses, and Goldstone particles in
+particular [22], pseudoscalar interpolating operators are unsuitable as they are infested by
+so-called direct instantons [23].5 We therefore discuss the heavy mesons and the K-meson
+separately in Secs. 2.1 and 2.2 respectively.
+An important criterion in assessing the validity of our sum rules is the so-called daughter
+sum rule, which we consider worthwhile to present now. In the simple single dispersion
+relation case this criterion reads
+mB²(s0, M²) = ∫_cut^{s0} e^(−s/M²) ρ(s) s ds / ∫_cut^{s0} e^(−s/M²) ρ(s) ds ,   (2.1)
+where M² is the Borel parameter, the “cut” marks the onset of physical states, ρ(s) =
+rB δ(s − mB²) + . . . is the spectral density and the dots stand for states above the continuum
+threshold s0. Formally, the residue rB drops out in the ratio. In practice ρ(s) is a continuous
+function in partonic computations and Eq. (2.1) should be seen as a self-consistency criterion
+for an s0 in the range of (mB + 2mπ)² to (mB + 4mπ)². If that is the case then Eq. (2.1)
+can be used to fix the central value of s0.
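The self-consistency role of Eq. (2.1) can be made concrete with a toy model: a pole at mB² plus a flat continuum above an assumed threshold. All numbers below (residue, continuum height and threshold) are invented stand-ins for the partonic ρ(s), chosen only to illustrate how the moment ratio responds to s0.

```python
import numpy as np

# Toy daughter sum rule (2.1): rho(s) = r_pole * delta(s - m_B^2) + c * theta(s - s_th).
# The delta-function part is added analytically; the flat continuum numerically.
m_B2, M2 = 5.28 ** 2, 2.6          # GeV^2; Borel parameter as in the B-meson case
s = np.linspace(0.0, 60.0, 60001)  # GeV^2 grid for the continuum integral
ds = s[1] - s[0]

def m2_eff(s0, r_pole=1.0, c_cont=0.05, s_th=30.0):
    w = np.exp(-s / M2) * c_cont * ((s > s_th) & (s < s0))
    num = np.sum(w * s) * ds + r_pole * np.exp(-m_B2 / M2) * m_B2
    den = np.sum(w) * ds + r_pole * np.exp(-m_B2 / M2)
    return num / den

# Pole dominance keeps the ratio within a few percent of m_B^2 for s0 ~ 35 GeV^2;
# raising s0 lets in more continuum and drags the ratio upwards.
assert 1.0 < m2_eff(35.0) / m_B2 < 1.05
assert m2_eff(35.0) < m2_eff(60.0)
```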
+2.1 B- and D-meson with Pseudoscalar Operators
+As motivated at the beginning of the section, the default choice for heavy-light 0− meson
+interpolating operators is
+
+JB = m₊ ¯b iγ5 q ,   ZB ≡ ⟨ ¯B|JB|0⟩ = mB² fB ,   m₊ ≡ (mb + mq) .   (2.2)
+
+In determining (1.3), one of the main challenges is that the momenta of the two B-mesons
+are degenerate. We bypass this problem by introducing an auxiliary momentum r into one
+5For the heavy mesons axial interpolating operators are unsuitable because the 1+ states are relatively
+low, e.g. for the JP = 0− B-meson with mB ≈ 5.28 GeV there is a 1+ B1(5721) with mB1 ≈ 5.72 GeV.
+This is too close to the two pion threshold and even below the typical continuum threshold s0 ≈ (6 GeV)2
+assumed for the pseudoscalar operators.
+
+Figure 1. Diagrams contributing to the correlation function in (2.3), with the double line
+representing the b-quark. (left) main diagram of the QbQq mixed type. (middle) b- and q-quark self
+energies. (right) ⟨¯qq⟩-condensate part of the b-quark self energy. There is no corresponding part for
+the q-quark self energy since ⟨¯bb⟩ is negligibly small. For the mass difference only the first one is
+relevant, while the others are useful to obtain stable sum rules as described in the text.
+of the currents and let it flow out at one of the two interpolating operators. Concretely we
+start from
+
+Γqq′(p², ˜p²) = c i³ ∫_{x,y,z,q} e^{i(˜p·z − p·y − (q+r)·x)} ⟨0|T J†B(z) jµ(x) jν(0) JB(y)|0⟩ ∆µν(q)|QqQq′
+ = ∫₀^∞ ds ∫₀^∞ d˜s ρΓqq′(s, ˜s)/((s − p²)(˜s − ˜p²)) = ZB² δqq′mB/((mB² − p²)(mB² − ˜p²)) + . . . ,   (2.3)
+
+with c ≡ −iα/(2mB(2π)³), ˜p = p + r, the shorthands xp = x · p and ∫_{q,x} = ∫ d⁴q d⁴x; the density is
+given by
+
+(2πi)² ρΓqq′(s, ˜s) = disc_{s,˜s}[Γqq′(s, ˜s)] ,   (2.4)
+
+the double discontinuity, with further relevant explanations at the end of the section. The
+quantity δqq′mB denotes the part of the mass shift proportional to the QqQq′-charges. Of course
+the auxiliary momentum r has to disappear from the final result. This is achieved by the on-shell
+condition “˜p² = p²” and is implemented in practice by treating them equally (p-˜p symmetry)
+and requiring the daughter sum rule to be satisfied reasonably well. The QCD sum
+rule is then given by
+δqq′mB = (1/ZB²) ∫_{m₊²}^{¯δ(a)(m₊²)} ds e^{(mB²−s)/M²} ∫_{m₊²}^{¯δ(a)(s)} d˜s e^{(mB²−˜s)/M²} ρΓqq′(s, ˜s) ,   (2.5)
+
+where M² is the Borel parameter from the Borel transformation and ¯δ(a) is the continuum
+threshold
+
+¯δ(a)(s) = 2^{1/a} σ0 [ 1 − (s/(2^{1/a}σ0))^a ]^{1/a} ,   (2.6)
+
+which is complicated for double dispersion sum rules [24]. Here it is implemented as in
+[25] but simplified since the two hadrons are identical, implying M² → 2 ˆM² and ˜s0 = ˜t0 =
+2^{1/a}σ0 (allowing for elimination of those parameters). The number σ0 ≈ 35 GeV² takes
+on the rôle of s0 in (2.1) and we shall use the notation s0 ≡ σ0 hereafter for reasons of
+familiarity. The parameter a is a model parameter, and the insensitivity of the result to a
+is a measure of the quality of the result itself.
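The shape of the duality region cut (2.6) is easy to visualise in code. The sketch below only encodes the formula as printed, with σ0 = 35 GeV² as in the text: a = 1 gives the straight-line (triangular) cut ˜s = 2σ0 − s, large a approaches the square cut ˜s < σ0, and the curve is an involution, ¯δ(¯δ(s)) = s, so the integration region is symmetric in s ↔ ˜s.

```python
# Continuum-threshold curve (2.6):
# delta_bar(s) = 2^(1/a) sigma0 * (1 - (s / (2^(1/a) sigma0))^a)^(1/a).
def delta_bar(s, a, sigma0=35.0):
    top = 2.0 ** (1.0 / a) * sigma0
    return top * (1.0 - (s / top) ** a) ** (1.0 / a)

# a = 1: straight line s + s_tilde = 2 * sigma0.
assert abs(delta_bar(10.0, 1.0) - (2 * 35.0 - 10.0)) < 1e-9
# a -> infinity: square cut s_tilde -> sigma0 (for s < sigma0).
assert abs(delta_bar(10.0, 1e5) - 35.0) < 1e-2
# Involution: the boundary is symmetric under s <-> s_tilde.
assert abs(delta_bar(delta_bar(12.0, 2.0), 2.0) - 12.0) < 1e-9
```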
+Let us turn to the computation. In perturbation theory there is the diagram connecting
+the q- to the b-quark and the self energies. We focus on the former, as it is numerically
+dominant, and present the self energies and the condensate contribution in App. C. The
+computation can be done analytically and we obtain the following compact result for the
+density
+ρΓbq = (NcαQqQb m₊²)/(32π³mB) · √(λ˜λ/(s˜s)) · [ A + (B/b) ln((a + b)/(a − b)) ] ,   (2.7)
+
+where
+
+a = mq² − (1/(4√(s˜s)))( s˜s + (m₊m₋)² ) + ( q ↔ b ) ,   b = (1/2)√(λ˜λ/(s˜s)) ,   A = m₋² ,
+B = [ Y ˜Y s˜s + (1/2)mq²√(s˜s)(Y + ˜Y) − (1/4)m₋²( s + ˜s + 4mbmq + 2mq² ) − (1/4)m₊²√(s˜s) ] + ( q ↔ b ) ,
+
+with further abbreviations
+
+m± = mb ± mq ,   λ = λ(s, mb², mq²) ,   Y = (s − m₊m₋)/(2s) ,   (2.8)
+
+λ(x, y, z) = x² + y² + z² − 2xy − 2xz − 2yz is the Källén function, and in the tilde quantities
+˜Y and ˜λ we have s → ˜s.
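As a quick sanity aid for readers implementing (2.7), the Källén function has properties that are easy to check numerically: full symmetry in its arguments, the equal-mass form λ(s, m², m²) = s(s − 4m²), and vanishing at the production threshold s = (mb + mq)².

```python
# Kallen (triangle) function entering the density (2.7) via lambda(s, m_b^2, m_q^2).
def kallen(x, y, z):
    return x * x + y * y + z * z - 2.0 * (x * y + x * z + y * z)

# Fully symmetric in its three arguments:
assert kallen(1.0, 2.0, 3.0) == kallen(3.0, 1.0, 2.0) == kallen(2.0, 3.0, 1.0)
# Equal-mass form lambda(s, m^2, m^2) = s * (s - 4 m^2):
s, m2 = 30.0, 4.0
assert kallen(s, m2, m2) == s * (s - 4.0 * m2)
# Vanishes at the production threshold s = (m_b + m_q)^2:
mb, mq = 2.0, 1.0
assert kallen((mb + mq) ** 2, mb ** 2, mq ** 2) == 0.0
```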
+A few words about the computation. We have taken the discontinuity in (2.4) using
+Cutkosky rules. A crucial point is that we do not cut the photon propagator, as this would
+be a QED correction to the B-meson state and does not contribute to (1.3). This amends
+the meaning of (2.4).
+Let us turn to the usage of the auxiliary momentum r in the context of double dispersion
+sum rules. First we note that this is different to a form factor computation, e.g.
+F^{π→π}(q²) [26], where the momentum transfer naturally takes on the rôle of this variable.
+It is closer to ∆F = 2 matrix elements, as there is no momentum transfer but the flavour
+contractions naturally lead to a symmetric configuration (e.g. [27]), which is more straightforward.
+In fact, since our procedure (2.3) artificially breaks the bq-symmetry, a and B turn
+out to be non-symmetric whereas b and A remain symmetric. This has to be remedied by
+the following substitution
+
+a → (1/2)(a + a|b↔q) ,   B → (1/2)(B + B|b↔q) ,   (2.9)
+
+which is apparent from the way the Cutkosky cuts work out. We have performed the
+computation in general gauge. Of course Γqq′ is gauge dependent but, as stated earlier, its
+discontinuity in the bq-quark lines is not. This is the case since the particles are put
+on the mass shell and it is important that the quantity is infrared safe. Otherwise, as
+previously stated, one needs to introduce extra machinery [20].
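The substitution (2.9) is a plain symmetrisation: an expression is averaged with its b ↔ q swapped counterpart. A minimal sketch with a deliberately asymmetric toy function (all names below are invented for illustration):

```python
# Symmetrisation as in (2.9): f -> (f + f|_{b <-> q}) / 2.
def symmetrise(f):
    return lambda mb, mq: 0.5 * (f(mb, mq) + f(mq, mb))

# A toy stand-in for an asymmetric coefficient such as "B" of (2.7):
B_toy = lambda mb, mq: mb ** 2 * mq + 3.0 * mq
B_sym = symmetrise(B_toy)

assert B_toy(4.8, 0.1) != B_toy(0.1, 4.8)   # asymmetric before
assert B_sym(4.8, 0.1) == B_sym(0.1, 4.8)   # symmetric after averaging
```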
+2.1.1 Numerics
+Our numerics have three cornerstones: the hadronic input parameters in Tab. 2, the daughter
+sum rule (2.1) and the choice of a mass scheme for mb. Whereas there is nothing to say
+about point one, the others are in need of some explanation. We start with the B-meson
+case. The daughter sum rule constrains the sum rule parameters: the continuum threshold
+s0 and the Borel parameter M². Additional constraints, defining the Borel window,
+are the convergence of the condensate expansion and keeping the B-pole term dominant
+versus the continuum contribution [21]. Let us turn to the question of the mass scheme,
+which is not independent of the second point. We consider the pole-, the kinetic- and the
+MS-scheme. In the pole scheme the b, c-quark self energy contributions (perturbative and
+condensate, diagrams 2 and 4 in Fig. 1) vanish and the sum rules are not stable, that is
+there is no Borel window, and we therefore discard it. For the MS-scheme the b-quark self
+energies are dominant, with the b-q contribution comparable to the condensates. Since these
+contributions cancel in the observable ∆m, this scheme is not ideal either and we therefore
+drop it. Hence we are left with the kinetic scheme for the b-quark, which shows good properties
+as for the B → γ form factor [28] and the gBB∗γ-couplings [25]. For the c-quark the self
+energies are not dominant and we use the MS-scheme, also because the kinetic scheme has
+proven unsuitable for gDD∗γ [25].
+As stated above, the daughter sum rule (2.1) is used to fix s0. For that purpose it is
+instructive to define the normalised ratio
+
+U(s0, M²) ≡ (1/mB²) · mB²(s0, M²) ,   (2.10)
+
+of the sum rule value over the experimental one, which has to be close to unity for self-consistency
+of the approach. This leads to
+{s0, ˆM²}B = {35.2(1.0), 2.6(0.5)} GeV² ,   {s0, ˆM²}D = {5.5(1), 1.0(0.25)} GeV² ,   (2.11)
+
+for which
+
+U(s0 ± 1 GeV², M²)∆mB|QED = 1 ± 0.01 ,   U(s0 ± 0.1 GeV², M²)∆mD|QED = 1 ± 0.01 .
+Using the input parameters in Tab. 2 (with mb^kin(1 GeV), ¯mc( ¯mc)) and the fB,D sum
+rule to LO (cf. App. B.1) for the ZB-factor we get
+
+∆mB|QED = +1.58 (+0.26/−0.23) MeV ,   ∆mD|QED = +2.25 (+0.89/−0.52) MeV ,   (2.12)
+
+where the error is obtained by adding the individual errors in quadrature. The dominant
+error is due to the heavy quark mass mb(c) (50-60%). The Borel mass M² and the duality
+parameter a each contribute a 20-25% uncertainty. The error in a is quantified by taking
+the standard deviation of the results with a ∈ [1/2, 1, 2, ∞]. The errors for the D-meson are
+larger, reflecting the generically inferior quality of the sum rule.
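For readers reproducing the error budget, the combination rule stated above is the usual quadrature sum. The component values in the sketch are invented placeholders; only the rule itself is taken from the text.

```python
import math

# Quadrature combination of independent error contributions.
def in_quadrature(errors):
    return math.sqrt(sum(e * e for e in errors))

# Invented example components (e.g. quark mass, Borel mass, duality parameter):
components = [0.20, 0.08, 0.07]
total = in_quadrature(components)

assert in_quadrature([3.0, 4.0]) == 5.0   # classic 3-4-5 check
assert total < sum(components)            # never exceeds the linear sum
assert total > max(components)            # never below the largest component
```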
+2.2 K-meson with Axial Operators
+As explained at the beginning of this section, pseudo-Goldstone bosons cannot be interpolated
+by pseudoscalar operators and one therefore resorts to axial ones
+
+Aµ = ¯q γµγ5 s ,   ⟨0|Aµ|K(p)⟩ = ipµfK .   (2.13)
+The correlation function corresponding to (2.3) assumes the form
+
+Γ^{αβ}qq′(p², ˜p²) = c i³ ∫_q ∫_{x,y,z} e^{i(˜p·z − p·y − (q+r)·x)} ⟨0|T Aα(z) jµ(x) jν(0) A†β(y)|0⟩ ∆µν(q)|QqQq′
+ = g^{αβ} Γ^{(0)}qq′ + pα pβ Γ^{(2)}qq′ + O(r) + . . . ,   (2.14)
+
+where the O(r)-terms are not of interest to us. The decisive information is in the pαpβ-term,
+which takes on the form
+
+Γ^{(2)}qq′ = fK² δqq′mK /((mK² − p²)(mK² − ˜p²)) + . . . ,   (2.15)
+
+in a hadronic representation, where the dots represent higher states in the spectrum (which
+includes the K∗-meson in this case).
+Let us turn to the computation, which involves some practical matters. Computing
+the double discontinuity of Γ^{(2)}qq′ is laborious as there are open Lorentz indices. One may
+though obtain the same information from a linear combination of (2.3) and (2.14) with
+contracted indices. It follows from Ward identities that (d = 4)
+
+Γ^{(2)}(s, s) = (1/(s²(1 − d))) ( s Γ^α_α(s, s) − d Γ(s, s) ) ,   (2.16)
+
+where we omitted the qq′-subscript for brevity and have set s = ˜s. The generalisation to
+s ̸= ˜s is in principle ambiguous but fortunately the differences are not that sizeable.
+Concretely we use
+
+Γ^{(2)}(s, ˜s) = (1/(s˜s(1 − d))) ( (1/2)(s + ˜s) Γ^α_α(s, ˜s) − d Γ(s, ˜s) ) ,   (2.17)
+
+and the analogue of (2.7), which is lengthy for the Kaon, is given in a Mathematica
+ancillary notebook attached to the arXiv version.
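The algebra behind (2.16) can be spot-checked exactly. Under the reading that Γ denotes the fully contracted correlator pαpβΓ^{αβ} = sΓ^{(0)} + s²Γ^{(2)} (our assumption for this sketch) and Γ^α_α = dΓ^{(0)} + sΓ^{(2)}, the combination in (2.16) isolates Γ^{(2)} for any d ≠ 1:

```python
from fractions import Fraction as F

# Exact rational spot-check of the projection (2.16).
# Decomposition as in (2.14): Gamma^{ab} = g^{ab} G0 + p^a p^b G2, with p^2 = s.
d, s, G0, G2 = F(7), F(30), F(3, 2), F(-5, 4)   # arbitrary rational values, d != 1

trace = d * G0 + s * G2            # Gamma^alpha_alpha
contracted = s * G0 + s ** 2 * G2  # p_a p_b Gamma^{ab}  (assumed meaning of "Gamma")

projected = (s * trace - d * contracted) / (s ** 2 * (1 - d))
assert projected == G2
```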
+Changing the prescription (2.17) by (1/2)(s + ˜s) → √(s˜s) results in a 15% change, which
+is sizeable but not extremely large and well within the error. In addition we use a weight
+function 1/(s˜s), as described in App. A.2, as otherwise the daughter sum rule is off by at least
+a factor of two, which is very large in view of how well it works in all other cases.
+Proceeding as before we obtain the following values
+{s₀, M̂²}_K = {0.7(1), 0.95(0.5)} GeV² ,  U(s₀ ± 0.1, M²) ∆m_K|_{QED} = 1.00 ± 0.10 ,  (2.18)
+for the sum rule parameters and the daughter sum rule (2.10). Using the input parameters
+in Tab. 2, the fK sum rule to LO (cf. App. B.1) and (2.18) we get
+∆m_K|_{QED} = +1.85^{+0.42}_{−0.66} MeV .  (2.19)
+Scale dependent quantities are evaluated at µ = 2 GeV. The uncertainty again comes from
+adding individual errors in quadrature. The dominant uncertainty (75%) comes from the
+m_s mass, with the remaining uncertainty due to the duality parameter a in (2.6).
+As stated in the introduction, the pion proved more difficult. That is, we were not able
+to find stable sum rules satisfying the daughter sum rule for reasonable values of the
+continuum threshold.6 We believe that this is due to its small mass mπ, which is considerably below
+the other hadronic masses. Conversely the Kaon mass, while being a pseudo-Goldstone, is
+much closer to the other hadrons (due to ms being close to ΛQCD).
+6The extra disconnected diagram for the π0, e.g. [18], is small since the γ5 generates a Levi-Civita tensor
+which enforces two extra loops. This is reflected in the smallness of the lattice result [18] and also by the
+fact that the LO chiral Lagrangian does not contribute to π0 (cf. App. D.2).
+3 Linear Quark Mass Correction ∆m_H|_{m_q}
+As stated in the introduction (and cf. App. D.1), the O(m_q)-corrections are governed by
+⟨H|q̄q|H⟩ (1.7). For the B- and D-mesons we compute this matrix element from QCD sum rules
+in Sec. 3.1, using similar techniques as for the QED correction, and for light mesons we
+resort to soft theorems (cf. Sec. 3.3) as the corresponding sum rules are inferior.
+3.1 QCD Sum Rule Computation of ⟨H̄|q̄q|H̄⟩ for H = B, D
+In order to anticipate the hierarchy of diagrams shown in Fig. 2 it is worthwhile to
+contemplate the heavy-quark behaviour. The matrix element scales (for H = B, for
+definiteness) as
+⟨B|q̄q|B⟩ = O(m_b) ,  (3.1)
+for relativistically normalised states, ⟨B(p)|B(q)⟩ = 2E_B(p⃗)(2π)³δ^{(3)}(p⃗ − q⃗), due to the
+factor E_B = O(m_b). On the one hand, the operator q̄q demands a chirality flip in
+perturbation theory and this cannot come from the m_b-mass, since the latter is entirely
+kinematic as we have just established. On the other hand, the condensate contribution ⟨q̄q⟩
+itself does not require this flip and is therefore unsuppressed and numerically leading.
+Figure 2. Diagrams contributing to the matrix element ⟨B|¯qq|B⟩. They are analogous to the
+ones in Fig. 1 but the square blob denotes the insertion of the ¯qq-operator. Perturbation theory
+is minimal and the quark condensate diagram is the main contribution. The mixed condensate
+diagrams ⟨¯qGq⟩ are mainly useful to stabilise the sum rule.
+To do the computation we start from the following correlation function
+Π(p², p̃², r) = i² ∫_{y,z} e^{i(p̃z − py − xr)} ⟨0|T J†_B(z) (q̄q)(x) J_B(y)|0⟩ ,  (3.2)
+where J_B has been defined in (2.2) and the auxiliary momentum r takes on the same rôle
+as before. The double dispersion relation of the correlation function reads
+Π(p², p̃², r) = ∫ ds ds̃ ρ_Π(s, s̃) / ((s − p² − i0)(s̃ − p̃² − i0)) = Z²_B ⟨B̄|q̄q|B̄⟩ / ((m²_B − p²)(m²_B − p̃²)) + … ,  (3.3)
+with (2πi)² ρ_Π(s, s̃) = disc_{s,s̃}[Π(s, s̃)], and the matrix element is then given by
+⟨B̄|q̄q|B̄⟩ = (1/Z²_B) ∫_{m²_+}^{δ̄^{(a)}(m²_+)} ds e^{(m²_B − s)/M²} ∫_{m²_+}^{δ̄^{(a)}(s)} ds̃ e^{(m²_B − s̃)/M²} ρ_Π(s, s̃) ,  (3.4)
+with δ̄^{(a)} defined in (2.6). The three contributions depicted in Fig. 2 are described below.
+• Perturbation theory is given by
+ρ_Π(s, s̃) = (m²_+ N_c m_q)/(2π²) · (s − (m_b − m_q)²)/(s + m²_q − m²_b) · λ^{1/2} δ(s̃ − s) ,  (3.5)
+with the anticipated O(m_q)-suppression. This term is negligible.
+• The ⟨q̄q⟩ condensate evaluates to
+⟨B̄|q̄q|B̄⟩ = −(4 m²_+ m²_b ⟨q̄q⟩ / Z²_B) e^{2(m²_B − m²_b)/M²} ,  (3.6)
+which is not suppressed by O(m_q) and thus dominant.
+• The mixed condensate yields
+⟨B̄|q̄q|B̄⟩ = −(m²_+ ⟨q̄σ g_s G q⟩ / Z²_B) e^{2(m²_B − m²_b)/M²} [ (1 − 3m²_b/M²) + (5/8 + 2m²_b/M² − 4m⁴_b/M⁴) ] ,  (3.7)
+which is not suppressed either, as it is in the same chirality representation as the
+quark condensate. The first and second terms in round brackets are from the third
+and fourth diagrams in Fig. 2.
+We consider it worthwhile to comment on how the lack of m_q-suppression in the condensate
+contribution arises. Its origin is the propagator 1/(r² − m²_q + iϵ) (we work in the r⃗ = 0
+frame)
+r² − m²_q + iϵ = (√s − (√s̃ + m_q − iϵ′))(√s − (√s̃ − m_q + iϵ′)) ,  (3.8)
+which when cut gives a term of the form (√s/m_q) δ(s − (√s̃ + m_q)²). The 1/m_q thus removes
+the O(m_q)-suppression in the numerator. Numerically, perturbation theory is entirely negligible
+and this is also the reason for not including the gluon condensate, which is expected to be
+further suppressed, O(Λ⁴_QCD/M⁴), as compared to perturbation theory.
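The factorisation (3.8) is the elementary identity (√s − √s̃ − m_q)(√s − √s̃ + m_q) = (√s − √s̃)² − m²_q, with r⁰ = √s − √s̃ in the r⃗ = 0 frame; a quick numerical sanity check (iϵ dropped, values arbitrary):

```python
import math

# In the r⃗ = 0 frame r0 = √s − √s̃, so r² − m_q² factorises as in (3.8):
# (√s − (√s̃ + m_q)) (√s − (√s̃ − m_q)) = (√s − √s̃)² − m_q²
def lhs(s, st, mq):
    return (math.sqrt(s) - math.sqrt(st))**2 - mq**2

def rhs(s, st, mq):
    return (math.sqrt(s) - (math.sqrt(st) + mq)) * (math.sqrt(s) - (math.sqrt(st) - mq))

for s, st, mq in [(28.0, 27.0, 0.005), (5.0, 4.5, 0.1)]:
    assert abs(lhs(s, st, mq) - rhs(s, st, mq)) < 1e-12
```

The zero of the first factor at √s = √s̃ + m_q is what produces the δ(s − (√s̃ + m_q)²) term with the 1/m_q-enhanced residue noted in the text.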
+3.1.1 Numerics
+The basic procedure for the numerics is the same as described in Sec. 2.1.1. However,
+the choice of scheme is not as important in this case. Any of the schemes (pole, kinetic
+and MS) gives similar results and indicates stability. The situation is certainly clearer with
+respect to the m_b-mass itself, as the matrix element is O(m_b) (3.1) and ∆m_B|_{m_q} itself is
+O(m⁰_b), whereas ∆m_B|_{QED} is computed from a non-local correlation function where the
+m_b-dependence is more difficult to track. Since the perturbative contribution is suppressed,
+there is no s₀ dependence (there would be at NLO in α_s). Hence we can fix the Borel value
+M² to satisfy the daughter sum rule (2.10), obtaining the following sum rule parameters
+{s₀, M̂²}_B = {35.0, 4.0} GeV² ,  {s₀, M̂²}_D = {6.0, 0.75} GeV² ,  (3.9)
+and daughter sum rules
+U(s₀, M̂² ± 0.15 GeV) ∆m_B|_{m_q} = 1.00^{+0.03}_{−0.02} ,
+U(s₀, M̂² ± 0.05 GeV) ∆m_D|_{m_q} = 1.00^{+0.20}_{−0.12} .  (3.10)
+Using the input parameters in Tab. 2 (with m^{kin}_b(1 GeV), m̄_c(m̄_c)), the f_{B,D} sum rules to
+LO (cf. App. B.1) and (3.9) we get
+⟨B̄|q̄q|B̄⟩_{µ=1 GeV} = 5.99^{+1.99}_{−1.41} GeV ,  ⟨D̄|q̄q|D̄⟩_{µ=m̄_c} = 3.40^{+1.78}_{−1.71} GeV ,  (3.11)
+for the matrix elements and
+∆m_B|_{m_q} = −1.88^{+0.49}_{−0.71} MeV ,  ∆m_D|_{m_q} = +2.68^{+1.48}_{−1.38} MeV ,  (3.12)
+for the mass differences.
+As this is a LO computation the errors are large, primarily coming from M² with a
+small contribution (20%) from the light quark masses. Note that the value set for M² is not
+independent of higher order α_s corrections. For the D-meson especially, the convergence of
+the sum rule is not good. This is reflected in the mixed condensate contributing a sizeable
+20% uncertainty.
+3.2 SU(3)_F estimates of ⟨H̄|q̄q|H̄⟩ for H = B, D
+Alternatively, one may use SU(3)_F flavour symmetry, ⟨B|q̄q|B⟩ ≈ ⟨B_s|s̄s|B_s⟩, to estimate
+⟨B|q̄q|B⟩ [12]. Following this analysis one may write (m_{ud} ≡ ½(m_u + m_d))
+(2m²_{Bs} − m²_{B⁺} − m²_{B⁰}) = 2(m_s − m_{ud}) ⟨B|q̄q|B⟩ ,  (3.13)
+from which
+⟨B|q̄q|B⟩ ≈ (m²_{Bs} − m²_B) / (m_s − m_{ud}) ,  (3.14)
+follows. Employing the input from the PDG [29] this leads to7
+∆m_B|_{m_q} = −2.37^{+0.35}_{−0.43} ± 20%_{SU3} MeV ,  ∆m_D|_{m_q} = +2.81^{+0.51}_{−0.41} ± 20%_{SU3} MeV .  (3.16)
+We have added a characteristic 20% SU(3)_F violation due to the use of ⟨B|q̄q|B⟩ ≈
+⟨B_s|s̄s|B_s⟩. The results are well compatible with (3.12) and we shall not use them any
+further. Note that in the heavy quark limit we have ∆m_B|_{m_q} = −∆m_D|_{m_q}, since the c
+and b are up- and down-type quarks respectively. This heavy quark limit relation holds
+reasonably well, as already observed in [12] (with slightly different input).
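As a cross-check, the estimate (3.14) together with the Feynman-Hellman relation ∆m_H|_{m_q} = (m_{q₁} − m_{q₂})⟨H|q̄q|H⟩/(2m_H) (our reading of (1.7) at linear order, with q₁ the valence light quark of the charged state) reproduces (3.16) from the Tab. 2 inputs:

```python
# Meson and quark masses from Tab. 2 (GeV; MSbar quark masses at 2 GeV)
mB, mBs, mD, mDs = 5.280, 5.367, 1.867, 1.968
mu, md, ms, mud = 2.16e-3, 4.67e-3, 93.4e-3, 3.45e-3

# (3.14): SU(3)_F estimate of the scalar matrix element
qqB = (mBs**2 - mB**2) / (ms - mud)   # <B|qbar q|B>
qqD = (mDs**2 - mD**2) / (ms - mud)   # <D|qbar q|D>

# Feynman-Hellman: Delta m^2 = (m_q1 - m_q2) <H|qbar q|H>
dmB = (mu - md) * qqB / (2 * mB) * 1e3   # B+ (u) minus B0 (d), in MeV
dmD = (md - mu) * qqD / (2 * mD) * 1e3   # D+ (d) minus D0 (u), in MeV

assert abs(dmB - (-2.37)) < 0.2   # cf. (3.16): -2.37 MeV
assert abs(dmD - (+2.81)) < 0.2   # cf. (3.16): +2.81 MeV
```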
+3.3 Soft Goldstone estimate of ⟨L|q̄q|L⟩ for L = π, K
+The matrix elements ⟨L|q̄q|L⟩, where L = π, K is a pseudo-Goldstone boson, may be
+estimated using soft-pion techniques, which in this case lead to the famous GMOR relation
+[31]. Concretely [32]
+m²_{π⁺,⁰} = (m_u + m_d) B₀ ,  m²_{K⁺} = (m_u + m_s) B₀ ,  m²_{K⁰} = (m_d + m_s) B₀ ,  (3.17)
+7 Or taking the η → 3π analysis [30], which in this case makes a difference, results in
+∆m_B|_{m_q} = −2.54^{+0.17}_{−0.18} ± 20%_{SU3} MeV ,  ∆m_D|_{m_q} = +3.01^{+0.21}_{−0.20} ± 20%_{SU3} MeV ,  (3.15)
+a more precise result.
+which hold to first order in the quark masses, with no QED corrections; the constant is
+B₀ = −2⟨q̄q⟩/f²_π ≈ 2.26 GeV at µ = 2 GeV. We see that for the pions there is no difference
+to linear order, which is a consequence of isospin [10]. The pion mass splitting is a ∆I = 2
+isospin effect, since the relevant matrix element has two pion states whereas the quark masses
+themselves are of ∆I = 1. Hence it takes at least two powers of the quark mass difference.
+Fortunately, the latter follows in a straightforward manner from chiral perturbation theory
+and one obtains to LO
+∆m_K|_{m_q} = (m_u − m_d)/(m_s − m_{ud}) · (m²_K − m²_π)/(2m_K) = (m_u − m_d)/(2m_{ud}) · m²_π/(2m_K) = −6.74^{+0.98}_{−1.21} MeV ,
+∆m_π|_{m_q} = (1/16) · (m_d − m_u)/(m_s − m_{ud}) · (m_d − m_u)/m_{ud} · m_π = +0.16^{+0.06}_{−0.05} MeV ,  (3.18)
+using the values from the PDG [29]. As expected, the pion contribution is rather small,
+being second order in the quark mass difference. It is noteworthy that one obtains
+∆m_K|_{m_q} ≈ −5.7 MeV when using (3.17) directly, which can be seen as an SU(3)_F
+correction that is well covered by the quoted uncertainty.
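The soft-theorem inputs can be checked numerically with the Tab. 2 values (f_π = 131 MeV, as in App. D.2): B₀ from the condensate, the LO masses from (3.17), and the splittings (3.18). Only rough agreement with the physical masses is expected at LO:

```python
import math

# Tab. 2 inputs (GeV; MSbar at 2 GeV)
qq = -(0.269)**3                 # <qbar q> condensate
fpi = 0.131
mu, md, ms, mud = 2.16e-3, 4.67e-3, 93.4e-3, 3.45e-3
mpi, mK = 0.137, 0.496

B0 = -2 * qq / fpi**2            # GMOR constant
assert abs(B0 - 2.26) < 0.05     # quoted value at mu = 2 GeV

# LO pseudo-Goldstone masses from (3.17)
assert abs(math.sqrt(2 * mud * B0) - mpi) < 0.02    # vs physical m_pi
assert abs(math.sqrt((mud + ms) * B0) - mK) < 0.05  # vs physical m_K

# LO splittings (3.18), in MeV
dmK  = (mu - md) / (2 * mud) * mpi**2 / (2 * mK) * 1e3
dmpi = (md - mu) / (ms - mud) * (md - mu) / mud * mpi / 16 * 1e3
assert abs(dmK - (-6.74)) < 0.6
assert abs(dmpi - 0.16) < 0.05
```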
+4 Final Overview and Conclusions
+In this paper we have computed the mass difference of the charged and neutral B-, D-
+and K-mesons. The results, which originate from electromagnetic and quark mass effects,
+are summarised and contrasted with experimental values in Tab. 1. The electromagnetic
+contribution is computed from the second order formula (1.3) in Sec. 2 and may be regarded
+as the core part of this paper. ∆mπ|QED is taken from a soft-pion theorem (cf. App. D.2)
+for completeness and comparison. Quark mass effects are obtained from the Feynman-
+Hellman formula (1.7) and its corresponding matrix element is computed in Sec. 3.1 for
+the B and the D respectively whereas for the K and the π a soft theorem turns out to be
+more reliable.
+The results obtained are consistent with the current experimental values. The uncer-
+tainties are above 20% and indeed more cannot be expected from a double dispersion sum
+rule at leading order in the strong coupling constant. Experimental uncertainties are one
+or two orders of magnitude lower.
+The values in Tab. 1 deserve some comments as they are not easily guessed by rules of
+thumb by a practitioner in non-perturbative QCD. The parametric estimate ∆m_H|_{QED} =
+c Q^{eff}_H (α/π) Λ_QCD with Λ_QCD = 200 MeV and Q^{eff}_D = 2Q^{eff}_{B,K} = 2/3 leads to c ≈ 10-20, which is a
+rather large number. To put this into perspective, one should keep in mind that these kinds
+of estimates are not straightforward, as the mass difference is obtained from a non-local
+(long distance) correlation function (1.3). The scale for the quark mass effect is of course
+set by m_u − m_d ≈ 2.5 MeV, and its sign depends on whether the non-q = u, d quark is of the
+up (charm) or down (beauty, strange) type. The cancellation, to almost an order of
+magnitude, of the electromagnetic and the quark mass contributions for the B-meson is
+remarkable, leading to an inflated uncertainty in ∆m_B.
+The main aim of this paper was to show that it is possible to understand the isospin
+mass difference from QCD sum rules, that is to obtain values compatible with experiment.
+H  | ∆m_H|_{QED}      | ∆m_H|_{m_q}       | ∆m_H           | ∆m_H|_{PDG} [29]
+B  | +1.58(24) MeV    | −1.88(60) MeV ^a  | −0.30(65) MeV  | −0.32(5) MeV
+D  | +2.25(70) MeV    | +2.7(1.4) MeV ^a  | +4.9(1.6) MeV  | +4.822(15) MeV
+K  | +1.85(54) MeV    | −6.7(1.1) MeV ^b  | −4.9(1.2) MeV  | −3.934(20) MeV
+π  | +4.8(1.2) MeV ^c | +0.16(5) MeV ^b   | +5.0(1.2) MeV  | +4.5936(5) MeV
+Table 1. Our values of ∆m_H due to the electromagnetic mass difference and the quark masses
+compared to the PDG values. The entries marked with ^a are obtained from the ⟨H|q̄q|H⟩ matrix
+element in conjunction with the Feynman-Hellman theorem (valid to LO in m_q). The values in italics
+should not be regarded as predictions of this work: ^b is derived from the soft theorem for (pseudo-)
+Goldstone bosons (cf. Sec. 3.3) and ^c results from the soft theorem in conjunction with the Weinberg sum
+rules (cf. App. D.2). It is noteworthy that ∆m_π|_{m_q} = O((m_u − m_d)²), which explains its smallness.
+For comparison, some lattice values: ∆m_D = 5.47(53) MeV and ∆m_K = −4.07(15)(15) MeV [4], and
+∆m_D = 4.68(10)(13) MeV [3], which are of course more precise as the lattice is suited for mass
+determination, even in the presence of QED, and due to the full inclusion of QCD.
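The totals in Tab. 1 are the sums of the two contributions with errors combined in quadrature, which can be checked directly (symmetrised errors read off the table):

```python
# (QED, m_q, total) central values and errors from Tab. 1, in MeV
entries = {
    "B":  ((+1.58, 0.24), (-1.88, 0.60), (-0.30, 0.65)),
    "D":  ((+2.25, 0.70), (+2.7,  1.4 ), (+4.9,  1.6 )),
    "K":  ((+1.85, 0.54), (-6.7,  1.1 ), (-4.9,  1.2 )),
    "pi": ((+4.8,  1.2 ), (+0.16, 0.05), (+5.0,  1.2 )),
}

for H, ((c1, e1), (c2, e2), (tot, etot)) in entries.items():
    assert abs((c1 + c2) - tot) < 0.06              # central values add (rounded)
    assert abs((e1**2 + e2**2)**0.5 - etot) < 0.06  # errors in quadrature
```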
+The sum rule computation could be improved by including radiative corrections in the
+strong coupling constant which would be a formidable task. Perhaps more interestingly,
+the formalism developed in this paper could be applied to baryons to obtain the proton-
+neutron mass difference for instance.
+Acknowledgments
+RZ is supported by a CERN associateship and an STFC Consolidated Grant, ST/P0000630/1.
+We are grateful to Michele Della Morte, Antonin Portelli and Max Hanson for informative
+comments on the lattice literature.
+A Variants of Quark-Hadron Duality
+In this appendix we elaborate on variations of quark-hadron duality. This is best explained
+by example. Consider the axial correlator in connection with the K
+Π^{αβ} = i ∫ d⁴x e^{ipx} ⟨0|T A†_α(x) A_β(0)|0⟩ = p^α p^β Π(p²) + g^{αβ} Π̂(p²) ,  (A.1)
+with A_β defined in (2.13). The Kaon appears in the first structure
+Π(p²) = f²_K/(m²_K − p²) + … ,  (A.2)
+where the dots stand for higher states as usual. QCD sum rules consist of two steps.
+Firstly, the observation that
+Π(p²) ≈ Π^{pQCD}(p²) ,  (A.3)
+for some p² outside the physical region (could be p² < 0), where pQCD stands for
+perturbative QCD with OPE improvements. In a second step one rewrites Eq. (A.3) as a
+dispersion relation followed by a Borel transform, under which (s − p²)^{−1} → exp(−s/M²)
+(M² is the Borel parameter), which results in
+∫₀^∞ e^{−s/M²} ρ(s) ≈ ∫₀^∞ e^{−s/M²} ρ^{pQCD}(s) ,  (A.4)
+with ρ(s) = (1/2πi) disc_s Π(s) = f²_K δ(s − m²_K) + … and the pQCD part defined analogously.
+The one assumption is then that this integral can be broken up as follows
+∫₀^{s₀} e^{−s/M²} ρ(s) ≈ ∫₀^{s₀} e^{−s/M²} ρ^{pQCD}(s) ,  (A.5)
+and (A.5) is sometimes referred to as semi-global quark-hadron duality [33]. One way to
+determine s₀ is to impose the daughter sum rule (2.1) and then for consistency with the
+duality assumption s₀ ought to be somewhere between (m_K + 2m_π)² and (m_K + 4m_π)².
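The mechanics of (A.4), (A.5) and the daughter sum rule can be illustrated with a toy flat density ρ^{pQCD}(s) = const (our choice, not the actual K-channel density): the residue extracted as f² = e^{m²/M²} ∫₀^{s₀} e^{−s/M²} ρ^{pQCD}(s) ds should be flat in the Borel parameter, and the daughter moment should sit near the pole mass squared:

```python
import math

def borel_moment(rho, s0, M2, n=0, steps=4000):
    """Compute ∫_0^{s0} s^n e^{-s/M2} rho(s) ds by the trapezoidal rule."""
    h = s0 / steps
    total = 0.0
    for i in range(steps + 1):
        s = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * s**n * math.exp(-s / M2) * rho(s)
    return total * h

rho = lambda s: 1.0          # toy flat "pQCD" density
s0, m2 = 1.0, 0.42           # toy continuum threshold and pole mass squared

# residue f^2 extracted at different Borel parameters M^2
f2 = [math.exp(m2 / M2) * borel_moment(rho, s0, M2) for M2 in (0.8, 1.0, 1.2)]
spread = max(f2) / min(f2) - 1.0
assert spread < 0.01         # stable to better than 1% over the Borel window

# daughter sum rule, cf. (A.7) with omega(s) = 1: <s> should sit near m^2
mass2 = borel_moment(rho, s0, 1.0, n=1) / borel_moment(rho, s0, 1.0)
assert abs(mass2 - m2) < 0.05
```

The stability of f² in M² is the usual practical criterion for choosing the sum rule window, while the daughter moment fixes s₀ self-consistently.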
+We want to briefly contemplate for which types of weight functions ω(s) the analogue of (A.5),
+∫₀^{s₀} e^{−s/M²} ρ(s) ω(s) ≈ ∫₀^{s₀} e^{−s/M²} ρ^{pQCD}(s) ω(s) ,  (A.6)
+with corresponding (2.1)
+m²_B = ∫_{cut}^{s₀} e^{−s/M²} ρ^{pQCD}(s) ω(s) s ds / ∫_{cut}^{s₀} e^{−s/M²} ρ^{pQCD}(s) ω(s) ds ,  (A.7)
+can hold. The crucial point is to be able to justify the analogue of Eq. (A.3).
+A.1 Weight function ω(s) = s
+We might start by rewriting the p^α p^β-part in (A.1) as follows
+p^α p^β Π(p²) = (p^α p^β/p²) (p² Π(p²)) .  (A.8)
+For the pQCD part one may directly write ρ^{pQCD}(s) → s ρ^{pQCD}(s) since p² does not lead
+to new singularities. Using (A.2), the QCD part can be written as
+(p² Π(p²)) = p² f²_K/(m²_K − p²) + ⋯ = −f²_K + m²_K f²_K/(m²_K − p²) + … ,  (A.9)
+where −f²_K is a constant that will disappear under Borel transformation, and thus ρ(s) →
+s ρ(s) works the very same way. The analogue of (A.3) can be justified in this case by
+replacing A†_α(x) → −∂²A†_α(x) in (A.1).8 Polynomial weight functions are generally referred
+to as moments and are familiar to the community, e.g. the moments in b → cℓν [34].
+It is quite clear that one cannot take arbitrarily high moments, as duality will then be
+challenged since smoothness is lost.
+8 In our case this is not trivial, as A†_α is not QED gauge invariant, but it can still be used at LO. In the
+general case this requires more thought.
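The rewriting in (A.9) is elementary partial fractioning, p²/(m² − p²) = −1 + m²/(m² − p²); a one-line check in exact arithmetic:

```python
from fractions import Fraction as F

def lhs(p2, m2, f2):
    return p2 * f2 / (m2 - p2)            # p^2 Pi(p^2), pole term only

def rhs(p2, m2, f2):
    return -f2 + m2 * f2 / (m2 - p2)      # constant + shifted pole, as in (A.9)

for p2 in (F(-3), F(1, 7), F(5)):
    assert lhs(p2, F(1, 4), F(2)) == rhs(p2, F(1, 4), F(2))
```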
+A.2 Weight function ω(s) = 1/(s − η)
+Choosing a weight function
+ω(s) = 1/(s − η) ,  (A.10)
+is equivalent to working with a subtracted dispersion relation of the form
+(Π(p²) − Π(η))/(p² − η) = ∫ ds ρ(s)/((s − p²)(s − η)) + c ,  (A.11)
+where c = −∫ ds ρ(s)/(s(s − η)) + Π′(η) is a subtraction constant such that the limit
+p² → 0 comes out correctly. The constant c is though not important in the end, as it
+vanishes under Borel transformation. The question of whether one can use (A.10) then
+turns into the question of whether the left hand side can be computed reliably.
+In our application to Kaons we have chosen η = 0, which is close to but still below the
+Kaon resonance. We have checked that for the f_K sum rule with s₀ = 0.7 GeV² the
+agreement is reasonable, and this serves at least as a partial justification of the procedure
+in Sec. 2.2.
+B Numerical Input
+The numerical QCD input is summarised in Tab. 2 and below we give the numerical values
+of the decay constants from the sum rules, which act as the effective LSZ factors.
+B.1 Decay constants f_B, f_D and f_K
+The extraction of both the QED mass shifts and the linear quark mass corrections requires
+values for the decay constants f_B, f_D and f_K. Note that, for consistency with the rest of
+this paper, these are evaluated at LO in QCD. The LO expressions for the pseudoscalar
+(B, D) and axial (K) correlators are well known (e.g. [38, 39]). The following values
+f_B = 0.157 GeV ,  {s₀, M²} = {33.5, 6.0} GeV² ,
+f_D = 0.158 GeV ,  {s₀, M²} = {5.7, 2.0} GeV² ,
+f_K = 0.147 GeV ,  {s₀, M²} = {1.1, 1.5} GeV² ,  (B.1)
+are obtained.
+C Self Energies and Condensates for ∆m_H|_{QED}
+In this appendix we present some extra computations: the self energies and condensate
+contributions to ∆m_B|_{QED}. These are important for stabilising the sum rules but do not
+affect the actual value of ∆m_B|_{QED} per se. This is the case since graphs proportional to
+Q²_b cancel in the mass difference. The only non-zero graph contributing to the mass
+shift is the q-q self energy, but it is numerically negligible. We wish to note that in all these
+graphs explicit gauge independence has been verified to hold after the double cut is taken.
+J^P = 0⁻ meson masses [29]:
+m_B = 5.280 GeV , m_{Bs} = 5.367 GeV , m_D = 1.867 GeV , m_{Ds} = 1.968 GeV , m_K = 0.496 GeV , m_π = 0.137 GeV .
+J^P = 0⁻ mass differences [29]:
+∆m_B = −0.32(5) MeV , ∆m_D = +4.822(15) MeV , ∆m_K = −3.934(20) MeV , ∆m_π = +4.5936(5) MeV .
+Quark masses [29]:
+m̄_b(m̄_b) = 4.18^{+0.03}_{−0.02} GeV , m̄_c(m̄_c) = 1.27(2) GeV , m^{pole}_b = 4.78(6) GeV , m^{pole}_c = 1.67(7) GeV ,
+m^{kin}_b|_{1 GeV} = 4.53(6) GeV , m^{kin}_c|_{1 GeV} = 1.13(5) GeV ;
+m̄_s|_{2 GeV} = 93.4^{+8.6}_{−3.4} MeV , m̄_d|_{2 GeV} = 4.67^{+0.48}_{−0.17} MeV , m̄_u|_{2 GeV} = 2.16^{+0.49}_{−0.26} MeV ,
+m̄_{ud}|_{2 GeV} = 3.45^{+0.35}_{−0.15} MeV , m̄_u/m̄_d = 0.474^{+0.056}_{−0.074} , m̄_s/m̄_{ud} = 27.33^{+0.67}_{−0.77} .
+Condensates:
+⟨q̄q⟩|_{2 GeV} = −(269(2) MeV)³ [35] , ⟨s̄s⟩|_{2 GeV} = 1.08(16) ⟨q̄q⟩ [36] , m²₀ = 0.8(2) GeV² [37] ,
+⟨0|(α/π)G²|0⟩ = 0.012(4) GeV⁴ [21] .
+Table 2. Summary of input parameters. Note that as inputs into the sum rules we use m_H = m_{H⁻},
+which has a completely negligible impact. The quantity m_{ud} ≡ ½(m_u + m_d) is the light quark
+average. The mixed condensate is parameterised as ⟨q̄σ g_s G q⟩ = m²₀ ⟨q̄q⟩, as is standard in the
+literature.
+C.1 Perturbation theory
+The perturbative b-b self energy graph, after mass renormalisation, takes on the form
+ρ_{Γbb}(s, s̃) = (N_c m²_+ Q²_b α)/(32π³ m_B) · λ^{1/2} · (s − m²_−)/(s + m_+ m_−) · f_R(m²_b) δ(s̃ − s) ,  (C.1)
+with the renormalised f_R given by9
+f_R(m²) = f(m²) + (32π² m²/e²) δZ_m = { 2m² (4 + 3 ln(µ²/m²)) , MS ;  0 , Pole ;  2m² (16µ/(3m) + 2µ²/m²) , Kinetic } ,  (C.2)
+f(m²) = 4m² B₀(m², 0, m²) + (d − 2) A₀(m²) .  (C.3)
+The functions A₀ and B₀ are the standard Passarino-Veltman functions with (FeynCalc)
+normalisation (2πµ)^{2ϵ} ∫ d^d k/(iπ²). Explicitly these are
+B₀(m², 0, m²) = 1/ϵ̂ + 2 + log(µ²/m²) ,  A₀(m²) = m² (1/ϵ̂ + 1 + log(µ²/m²)) ,  (C.4)
+with 1/ϵ̂ = 1/ϵ − γ_E + log 4π. The q-q graph can be obtained by replacing b → q in the result
+and since it is O(m²_q) it is negligible.
+9Note that the vanishing in the pole scheme is clear, by the very definition of the scheme, since we are
+on-shell after the cuts.
+C.2 Condensates
+The only relevant condensate graph is given in Fig. 1 (4th diagram). With m_q → 0 the
+density is
+ρ^{⟨q̄q⟩}_{Γbb} = −(m²_b α Q²_b)/(8π m_B) m_b ⟨q̄q⟩ δ(s − m²_b) δ(s̃ − m²_b) f_R(m²_b) .  (C.5)
+Light quark mass corrections come from Taylor expanding the quark fields, leading to
+derivatives of δ-functions. It is thus more convenient to directly display the resulting mass
+shift
+∆m_B|_{⟨q̄q⟩} = −(m²_+ α Q²_b)/(8π m_B Z²_B) e^{2(m²_B − m²_b)/M²} ⟨q̄q⟩ [ m_b − (m_q/4)(1 + 4m²_b/M²) ] f_R(m²_b) .  (C.6)
+The ⟨q̄q⟩ condensate graph where the photon connects the b- and the q-quark is not of
+short distance type (it leads to 1/m²_q in the propagator) and is therefore omitted. This
+is similar to the B → γ form factor, although in that case the physics is covered by the
+photon distribution amplitude (e.g. [28]).
+D Some Classic Results
+In this appendix we summarise some classic results which are of use and referred to in the
+paper.
+D.1 Linear quark mass dependence from the Feynman-Hellman theorem
+In order to derive the Feynman-Hellman theorem it is convenient to use states ⟨B̂(p)|B̂(q)⟩ =
+(2π)³δ^{(3)}(p⃗ − q⃗) normalised in a non-relativistic manner (the translation to the usual states
+is |B̂⟩ = |B⟩/√(2E_B)). Taking the derivative of ⟨B̂|H|B̂⟩ (using ∂_{m_q}⟨B̂(p)|B̂(q)⟩ = 0) one
+obtains
+m_q ∂_{m_q} E_B = m_q ⟨B̂|q̄q|B̂⟩ ,  (D.1)
+which is equivalent to
+m_q ∂_{m_q} (2E²_B) = 2m_q ⟨B|q̄q|B⟩ ,  (D.2)
+which in turn is consistent with
+m²_B|_{m_q} = Σ_q m_q ⟨B|q̄q|B⟩ ,  (D.3)
+since the momenta are independent of the mass. This is the relation quoted in (1.6) in the
+main text.
+D.2 ∆m_π|_{QED} from soft theorem and Weinberg sum rules
+Using soft-pion techniques it was shown that [2]
+∆m_π|_{QED} = (3α/(8π m_π f²_π)) ∫₀^∞ ds s ln(µ²/s) (ρ_V(s) − ρ_A(s)) + O(m²_π/m²_ρ) ,  (D.4)
+where ρ_V = f²_ρ δ(s − m²_ρ) + … is the spectral density of the vector triplet current and ρ_A
+is the analogous quantity for the axial case. The ln s-term originates from integrating over
+the photon momentum d⁴q. We refer the reader to [10] for an improved treatment using
+chiral perturbation theory. In fact, as is the case for all soft-pion results, Eq. (D.4) follows
+from the LO electromagnetic term in the Lagrangian and can therefore be systematically
+improved beyond the soft limit to the extent that its low energy constants (i.e. couplings)
+are known. Using the Weinberg sum rules [40], which are phenomenologically successful,
+a good estimate was obtained [2]. Taking the equations resulting from the so-called first
+and second Weinberg sum rules in [41], then
+f²_ρ = f²_{a₁} + f²_π ,  m²_ρ f²_ρ = m²_{a₁} f²_{a₁} ,  (D.5)
+(where the chiral limit m_q = 0 is assumed). Moreover, the spectral functions are
+truncated after the first resonances, ρ and a₁, which can be justified as chiral
+symmetry is restored at high energy. Using these in (D.4) one gets
+∆m_π|_{QED} = (3α/8π) (m²_ρ f²_ρ)/(m²_π f²_π) m_π ln( f²_ρ/(f²_ρ − f²_π) ) ≈ 4.8 MeV ,  (D.6)
+for f_π = 131 MeV, m_ρ = 0.77 GeV [29] and f_ρ = 215 MeV [42]. Since the quark mass
+effect is small, O((m_u − m_d)²) (3.18), one has ∆m_π ≈ ∆m_π|_{QED}, which is rather close to the
+experimental value ∆m_π = +4.5936(5) MeV [29]. Clearly (D.6) is a crude approximation:
+more detailed analyses [10, 43] including finite width effects yield a result which is ca.
++1.2 MeV larger [43]. We therefore assign an uncertainty of this amount to ∆m_π|_{QED} in
+Tab. 1.
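Evaluating (D.6) with the quoted inputs indeed gives roughly 4.8 MeV:

```python
import math

alpha = 1 / 137.036          # fine-structure constant
fpi, frho = 0.131, 0.215     # decay constants, GeV
mpi, mrho = 0.137, 0.77      # masses, GeV

# (D.6): soft-pion result with rho/a1 saturation of the Weinberg sum rules
dm_pi = (3 * alpha / (8 * math.pi)
         * (mrho**2 * frho**2) / (mpi**2 * fpi**2)
         * mpi
         * math.log(frho**2 / (frho**2 - fpi**2)))

assert abs(dm_pi * 1e3 - 4.8) < 0.2   # in MeV
```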
+It is also worthwhile to mention two other interesting aspects in conjunction with
+∆m_π|_{QED}. First, by using QCD inequalities it has been shown that ∆m_π|_{QED} ≥ 0
+[44], which is of course well satisfied. Second, Dashen's theorem [45] states that ∆m²_π|_{QED} −
+∆m²_K|_{QED} = O(αm_s, αm_q ln m_q) as a result of degeneracy in the SU(3)_F limit m_s = m_d =
+m_u. The corrections seem rather large and are largely kinematic, due to the larger K mass in the
+Kaon propagator [46]. Lattice Monte Carlo simulations have settled this matter to large
+precision [47] (cf. [48] for a review).
+References
+[1] A. Zee, “The Proton - neutron mass difference problem and related topics,” Phys. Rept. 3
+(1972) 127–192.
+[2] T. Das, G. S. Guralnik, V. S. Mathur, F. E. Low, and J. E. Young, “Electromagnetic mass
+difference of pions,” Phys. Rev. Lett. 18 (1967) 759–761.
+[3] S. Borsanyi et al., “Ab initio calculation of the neutron-proton mass difference,” Science 347
+(2015) 1452–1455, arXiv:1406.4088 [hep-lat].
+[4] D. Giusti, V. Lubicz, C. Tarantino, G. Martinelli, F. Sanfilippo, S. Simula, and N. Tantalo,
+“Leading isospin-breaking corrections to pion, kaon and charmed-meson masses with
+Twisted-Mass fermions,” Phys. Rev. D 95 no. 11, (2017) 114504, arXiv:1704.06561
+[hep-lat].
+[5] I. I. Bigi and A. I. Sanda, CP violation, vol. 9. Cambridge University Press, 9, 2009.
+[6] G. C. Branco, L. Lavoura, and J. P. Silva, CP Violation, vol. 103. 1999.
+[7] R. P. Feynman and G. Speisman, “Proton-Neutron Mass Difference,” Phys. Rev. 94 no. 2,
+(1954) 500.
+[8] M. Cini, E. Ferrari, and R. Gatto, “Neutron-Proton Mass Difference by Dispersion Theory,”
+Phys. Rev. Lett. 2 no. 1, (1959) 7–9.
+[9] W. N. Cottingham, “The neutron proton mass difference and electron scattering
+experiments,” Annals Phys. 25 (1963) 424–432.
+[10] J. F. Donoghue and A. F. Perez, “The Electromagnetic mass differences of pions and kaons,”
+Phys. Rev. D 55 (1997) 7075–7092, arXiv:hep-ph/9611331.
+[11] W. A. Bardeen, J. Bijnens, and J. M. Gerard, “Hadronic Matrix Elements and the pi+ pi0
+Mass Difference,” Phys. Rev. Lett. 62 (1989) 1343.
+[12] P. Colangelo, M. Ladisa, G. Nardulli, and T. N. Pham, “Electromagnetic mass difference of
+heavy mesons,” Phys. Lett. B 416 (1998) 208–215, arXiv:hep-ph/9709201.
+[13] M. A. Luty and R. Sundrum, “Heavy meson electromagnetic mass differences from QCD,”
+Phys. Rev. D 52 (1995) 1627–1638, arXiv:hep-ph/9502259.
+[14] A. Walker-Loud, C. E. Carlson, and G. A. Miller, “The Electromagnetic Self-Energy
+Contribution to Mp − Mn and the Isovector Nucleon Magnetic Polarizability,” Phys. Rev.
+Lett. 108 (2012) 232301, arXiv:1203.0254 [nucl-th].
+[15] T. Hambye, “A Unified treatment of mass differences for light and heavy pseudoscalars,”
+Phys. Lett. B 319 (1993) 300–306.
+[16] J. C. Collins, “Renormalization of the Cottingham Formula,” Nucl. Phys. B 149 (1979)
+90–100. [Erratum: Nucl.Phys.B 153, 546 (1979), Erratum: Nucl.Phys.B 915, 392–393 (2017)].
+[17] J. Gasser, M. Hoferichter, H. Leutwyler, and A. Rusetsky, “Cottingham formula and nucleon
+polarisabilities,” Eur. Phys. J. C 75 no. 8, (2015) 375, arXiv:1506.06747 [hep-ph].
+[Erratum: Eur.Phys.J.C 80, 353 (2020)].
+[18] X. Feng, L. Jin, and M. J. Riberdy, “Lattice QCD Calculation of the Pion Mass Splitting,”
+Phys. Rev. Lett. 128 no. 5, (2022) 052003, arXiv:2108.05311 [hep-lat].
+[19] R. Zwicky, “QED-Corrections to Weak Decays,” Symmetry 13 no. 11, (2021) 2036,
+arXiv:2205.06194 [hep-ph].
+[20] S. Nabeebaccus and R. Zwicky, “Resolving charged hadrons in QED — gauge invariant
+interpolating operators,” JHEP 11 (2022) 101, arXiv:2209.06925 [hep-ph].
+[21] M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, “QCD and Resonance Physics.
+Theoretical Foundations,” Nucl. Phys. B147 (1979) 385–447.
+[22] V. A. Novikov, M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, “Are All Hadrons
+Alike? ,” Nucl. Phys. B 191 (1981) 301–369.
+[23] E. V. Shuryak, “Pseudoscalar Mesons and Instantons,” Nucl. Phys. B 214 (1983) 237–252.
+[24] Y. Y. Balitsky, V. M. Braun, and A. V. Kolesnichenko, “The decay Sigma+ —> p gamma in
+QCD: Bilocal corrections in a variable magnetic field and the photon wave functions,” Sov.
+J. Nucl. Phys. 48 (1988) 348–357.
+[25] B. Pullin and R. Zwicky, “Radiative Decays of Heavy-light Mesons and the f (T )
+H,H∗,H1 Decay
+Constants,” arXiv:2106.13617 [hep-ph].
+[26] V. A. Nesterenko and A. V. Radyushkin, “Sum Rules and Pion Form-Factor in QCD,” Phys.
+Lett. B 115 (1982) 410.
+[27] M. Kirk, A. Lenz, and T. Rauh, “Dimension-six matrix elements for meson mixing and
+lifetimes from sum rules,” JHEP 12 (2017) 068, arXiv:1711.02100 [hep-ph]. [Erratum:
+JHEP 06, 162 (2020)].
+[28] T. Janowski, B. Pullin, and R. Zwicky, “Charged and neutral Bu,d,s → γ form factors from
+light cone sum rules at NLO,” JHEP 12 (2021) 008, arXiv:2106.13616 [hep-ph].
+[29] Particle Data Group Collaboration, P. A. Zyla et al., “Review of Particle Physics,” PTEP
+2020 no. 8, (2020) 083C01.
+[30] G. Colangelo, S. Lanz, H. Leutwyler, and E. Passemar, “Dispersive analysis of η → 3π,” Eur.
+Phys. J. C 78 no. 11, (2018) 947, arXiv:1807.11937 [hep-ph].
+[31] M. Gell-Mann, R. J. Oakes, and B. Renner, “Behavior of current divergences under SU(3) x
+SU(3),” Phys. Rev. 175 (1968) 2195–2199.
+[32] J. F. Donoghue, E. Golowich, and B. R. Holstein, Dynamics of the standard model, vol. 2.
+CUP, 2014.
+[33] M. A. Shifman, “Quark hadron duality,” in 8th International Symposium on Heavy Flavor
+Physics, vol. 3, pp. 1447–1494. World Scientific, Singapore, 7, 2000. arXiv:hep-ph/0009131.
+[34] I. I. Y. Bigi, M. A. Shifman, and N. Uraltsev, “Aspects of heavy quark theory,” Ann. Rev.
+Nucl. Part. Sci. 47 (1997) 591–661, arXiv:hep-ph/9703290.
+[35] G. S. Bali, F. Bruckmann, M. Constantinou, M. Costa, G. Endrodi, S. D. Katz,
+H. Panagopoulos, and A. Schafer, “Magnetic susceptibility of QCD at zero and at finite
+temperature from the lattice,” Phys. Rev. D 86 (2012) 094512, arXiv:1209.6015
+[hep-lat].
+[36] C. McNeile, A. Bazavov, C. T. H. Davies, R. J. Dowdall, K. Hornbostel, G. P. Lepage, and
+H. D. Trottier, “Direct determination of the strange and light quark condensates from full
+lattice QCD,” Phys. Rev. D 87 no. 3, (2013) 034503, arXiv:1211.6577 [hep-lat].
+[37] B. L. Ioffe, “Condensates in quantum chromodynamics,” Phys. Atom. Nucl. 66 (2003) 30–43,
+arXiv:hep-ph/0207191.
+[38] M. Jamin and B. O. Lange, “fB and fBs from QCD sum rules,” Phys. Rev. D65 (2002)
+056005, arXiv:hep-ph/0108135 [hep-ph].
+[39] P. Ball and R. Zwicky, “SU(3) breaking of leading-twist K and K* distribution amplitudes:
+A Reprise,” Phys. Lett. B 633 (2006) 289–297, arXiv:hep-ph/0510338.
+[40] S. Weinberg, “Precise relations between the spectra of vector and axial vector mesons,”
+Phys. Rev. Lett. 18 (1967) 507–509.
+[41] R. Zwicky, “A brief Introduction to Dispersion Relations and Analyticity,” in Quantum Field
+Theory at the Limits: from Strong Fields to Heavy Quarks. 10, 2016. arXiv:1610.06090
+[hep-ph].
+[42] A. Bharucha, D. M. Straub, and R. Zwicky, “B → V ℓ+ℓ− in the Standard Model from
+light-cone sum rules,” JHEP 08 (2016) 098, arXiv:1503.05534 [hep-ph].
+[43] D. J. Gross, S. B. Treiman, and F. Wilczek, “Light Quark Masses and Isospin Violation,”
+Phys. Rev. D 19 (1979) 2188.
+[44] E. Witten, “Some Inequalities Among Hadron Masses,” Phys. Rev. Lett. 51 (1983) 2351.
+[45] R. F. Dashen, “Chiral SU(3) x SU(3) as a symmetry of the strong interactions,” Phys. Rev.
+183 (1969) 1245–1260.
+[46] J. F. Donoghue, B. R. Holstein, and D. Wyler, “Electromagnetic selfenergies of pseudoscalar
+mesons and Dashen’s theorem,” Phys. Rev. D 47 (1993) 2089–2097.
+[47] Z. Fodor, C. Hoelbling, S. Krieg, L. Lellouch, T. Lippert, A. Portelli, A. Sastre, K. K. Szabo,
+and L. Varnhorst, “Up and down quark masses and corrections to Dashen’s theorem from
+lattice QCD and quenched QED,” Phys. Rev. Lett. 117 no. 8, (2016) 082001,
+arXiv:1604.07112 [hep-lat].
+[48] A. Portelli, “Inclusion of isospin breaking effects in lattice simulations,” PoS
+LATTICE2014 (2015) 013, arXiv:1505.07057 [hep-lat].
+– 20 –
+
diff --git a/bdE4T4oBgHgl3EQfPQyn/content/tmp_files/load_file.txt b/bdE4T4oBgHgl3EQfPQyn/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..1f13211215981c6ce547f39c2ece80a94a939028
--- /dev/null
+++ b/bdE4T4oBgHgl3EQfPQyn/content/tmp_files/load_file.txt
@@ -0,0 +1,1088 @@
+Prepared for submission to JHEP                                    CERN-TH-2023-005
+
+Isospin Mass Differences of the B, D and K
+
+Matthew Rowe,1 Roman Zwicky1,2
+1Higgs Centre for Theoretical Physics, School of Physics and Astronomy, University of
+Edinburgh, Edinburgh EH9 3JZ, Scotland
+2Theoretical Physics Department, CERN, Esplanade des Particules 1, Geneva CH-1211,
+Switzerland
+E-mail: m.j.rowe@sms.ed.ac.uk, roman.zwicky@ed.ac.uk
+
+Abstract: We compute the electromagnetic mass difference for the B-, D- and K-mesons
+using QCD sum rules with double dispersion relations. For the B- and D-mesons we also
+compute the linear quark mass correction, whereas for the K the standard soft theorems
+prove more powerful. The mass differences, which have not previously been computed via
+a double dispersion, are fully consistent with experiment, albeit with large uncertainties.
+Contents
+1 Introduction 1
+2 Electromagnetic Mass Difference ∆mH|QED from QCD Sum Rules 3
+  2.1 B- and D-meson with Pseudoscalar Operators 3
+    2.1.1 Numerics 5
+  2.2 K-meson with Axial Operators 6
+3 Linear Quark Mass Correction ∆mH|mq 8
+  3.1 QCD Sum Rule Computation of ⟨H̄|q̄q|H̄⟩ for H = B, D 8
+    3.1.1 Numerics 9
+  3.2 SU(3)F estimates of ⟨H̄|q̄q|H̄⟩ for H = B, D 10
+  3.3 Soft Goldstone estimate of ⟨L|q̄q|L⟩ for L = π, K 10
+4 Final Overview and Conclusions 11
+A Variants of Quark-Hadron Duality 12
+  A.1 Weight function ω(s) = s 13
+  A.2 Weight function ω(s) = 1/(s − η) 14
+B Numerical Input 14
+  B.1 Decay constants fB, fD and fK 14
+C Self Energies and Condensates for ∆mH|QED 14
+  C.1 Perturbation theory 15
+  C.2 Condensates 16
+D Some Classic Results 16
+  D.1 Linear quark mass dependence from Feynman-Hellmann theorem 16
+  D.2 ∆mπ|QED from soft theorem and Weinberg sum rules 16
+
+arXiv:2301.04972v1 [hep-ph] 12 Jan 2023
+
+1 Introduction
+The mass difference of charged and neutral hadrons,
+    ∆mH = mH+ − mH0 ,   H = B, D, K, π, p ,                               (1.1)
+is an isospin breaking effect and has intrigued particle physicists from the very beginning.
+In particular the proton-neutron [1] and the π+-π0 [2] mass differences have been discussed
+extensively. At the microscopic level ∆mH is driven by differences in the electric charge
+and the mass mq of the hadron's light valence quark q = u, d,
+    ∆mB = ∆mB|QED + ∆mB|mq .                                              (1.2)
+The sign and the size depend on the hadron in question, and QED stands for quantum
+electrodynamics.1,2 Recent lattice Monte Carlo simulations [3, 4] have verified this to high
+accuracy, for light and charm mesons, by computing both the charged and the neutral
+mass and effectively using (1.1). One may take a different approach and compute the two
+differences in (1.2) separately by using the second order perturbation theory formula
+(with H = B for definiteness)3
+    δmB|QED = −iα/(2mB(2π)³) ∫ d⁴q T(B)µν(q) ∆µν(q) + O(α²) ,             (1.3)
+with
+    ∆mB|QED ≡ δmB+|QED − δmB0|QED ,                                       (1.4)
+known in the current algebra era [7, 8]. Above, ∆µν(q) = (1/q²)(−gµν + (1 − ξ) qµqν/q²)
+is the photon propagator, α = e²/(4π) the fine structure constant, and T(B)µν(q) is the
+(uncontracted) forward Compton scattering tensor,
+    T(B)µν(q) = i ∫ d⁴x e^(−iq·x) ⟨B|T jµ(x) jν(0)|B⟩ ,                   (1.5)
+______
+1 Strictly speaking the separation (1.2) is not well-defined, as it requires fixing a (quark mass)
+renormalisation scheme, e.g. [3]. In turn this is a reason for being interested in the problem as,
+especially light, quark masses cannot be determined to high precision without folding in QED.
+This shows for example in the D-meson results in comparison between [3] and [4]. For our
+purposes ∆mB|mq is as defined from (1.7).
+2 Effects due to the weak force are of O(Λ²QCD/m²W) with respect to QED and are thus
+negligible. Similar effects are relevant in the context of neutral meson mixing, e.g. [5, 6].
+3 Note that in the literature the notation ∆m²B ≡ 2mB∆mB is also frequently used.
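+
As a hedged aside (not stated in the paper): since jµ is conserved, the ξ-dependent piece of the photon propagator formally drops out between the physical states in (1.5), so (1.3) combined with (1.5) can be written compactly in Feynman gauge (ξ = 1):

```latex
% Sketch: (1.3) combined with (1.5) at xi = 1; dropping the
% q_mu q_nu term is formally justified by current conservation
% of j_mu between physical states.
\delta m_B\big|_{\rm QED}
  = \frac{i\alpha}{2 m_B (2\pi)^3} \int d^4q\,
    \frac{g^{\mu\nu}\, T^{(B)}_{\mu\nu}(q)}{q^2} + \mathcal{O}(\alpha^2)\,,
\qquad
g^{\mu\nu}\, T^{(B)}_{\mu\nu}(q)
  = i \int d^4x\, e^{-i q\cdot x}\,
    \langle B\,|\,T\, j^{\mu}(x)\, j_{\mu}(0)\,|\,B\rangle\,.
```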
+with jα = Σq Qq q̄γα q the electromagnetic current. In 1963, Cottingham [9] improved this
+formula by parameterising it in terms of form factors and relating it to structure functions.
+That is, by deforming the contour q0 → iq0, writing a dispersion representation and
+assessing the number of subtraction terms of the form factors, he was able to write the
+contribution as an integral over Q² = −q² ≥ 0 and ν = p·q/mB in the physical region.
+This opened the gate for many phenomenological studies saturating the dispersion relation
+by a few terms beyond the elastic one and using high energy constraints. This is a
+formidable task, as one requires knowledge of a correlation function over the entire energy
+range, akin to the situation of the vacuum polarisation for the anomalous magnetic
+moment. Some examples are: K and π [10, 11] using chiral perturbation theory (and large
+Nc); B and D [12, 13] using heavy quark theory (and large Nc); the proton-neutron [14]
+with updated fits to the structure functions; and an approach to B, D, K and π using
+vector meson dominance [15]. Another interesting point, not unrelated, is that (1.3)
+requires renormalisation [16] and it was argued that it is justified to cut off the Q²-integral.
+Debates about subtraction terms are ongoing, cf. [14] and the response [17].
+Here we do not follow this phenomenological approach but evaluate (1.5) directly in
+Minkowski space using double dispersion relation sum rules, and thus determine the mass
+differences from a unified framework (i.e. same hadronic input).4 To the best of our
+knowledge this has not been done previously with sum rules, presumably due to the
+subtleties of non gauge-invariant interpolating currents [19, 20]. For example, in leptonic
+decays this requires the introduction of a non-local interpolating operator (or an auxiliary
+scalar field carrying the charge to infinity) for gauge invariance and reproduction of all
+infrared sensitive logs [20]. However, in the case at hand this is not necessary, as verified
+by explicit computation, since ∆mB is an infrared safe quantity.
+An efficient and transparent way to implement the first order quark mass corrections is to
+make use of the Feynman-Hellmann theorem, which gives
+    m²B|mq = Σq mq ⟨B|q̄q|B⟩ ,                                             (1.6)
+as rederived in App. D.1. For the difference (1.1) this gives
+    ∆mB|mq = (mu − md)/(2mB) ⟨B|q̄q|B⟩ + O((mu − md)²) .                   (1.7)
+The matrix element ⟨B|q̄q|B⟩ can be evaluated in the isospin degenerate limit q = u = d,
+since we work to leading order (LO). For the B- and the D-meson we compute this matrix
+element, whereas for the kaon and the pion a soft theorem, ⟨π|q̄q|π⟩ = −(2/f²π)⟨0|q̄q|0⟩ +
+O(m²π/m²ρ) with fπ ≈ 131 MeV, due to their pseudo-Goldstone nature, proves more
+effective. In principle one could compute all the ∆mB|mq-effects with the QCD analogue
+of (1.3), but this would be rather inefficient; we comment further in the relevant section.
+______
+4 This function has been evaluated for the pion on the lattice, with good agreement with
+experiment, only very recently using the infinite volume reconstruction method [18].
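+
To make the use of Eq. (1.7) and the soft theorem concrete, here is a minimal numerical sketch (not from the paper). The condensate ⟨0|q̄q|0⟩ ≈ (−0.24 GeV)³ and mu − md ≈ −2.5 MeV are commonly quoted ballpark inputs, and the value taken for ⟨B|q̄q|B⟩ below is a pure placeholder, not the paper's result:

```python
# Hedged sketch of Eq. (1.7) and the soft theorem for the pion.
# All numerical inputs are illustrative assumptions, not the paper's values.

f_pi = 0.131          # GeV, as quoted in the text (f_pi ~ 131 MeV)
qq_vac = (-0.24)**3   # GeV^3, commonly quoted vacuum condensate <0|qbar q|0>

# Soft theorem: <pi|qbar q|pi> = -(2/f_pi^2) <0|qbar q|0> + O(m_pi^2/m_rho^2)
pi_me = -2.0 / f_pi**2 * qq_vac
print(f"<pi|qbar q|pi> ~ {pi_me:.2f} GeV")  # positive and O(1 GeV)

# Eq. (1.7): Delta m_B|_mq = (m_u - m_d)/(2 m_B) <B|qbar q|B>
m_B = 5.28            # GeV
dm_ud = -0.0025       # GeV, m_u - m_d (illustrative)
B_me = 2.0            # GeV, PLACEHOLDER for <B|qbar q|B> (hypothetical value)
dm_B_mq = dm_ud / (2.0 * m_B) * B_me
print(f"Delta m_B|_mq ~ {dm_B_mq * 1000:.3f} MeV")  # sub-MeV shift
```

The point of the sketch is the scaling: the quark-mass contribution to ∆mB is suppressed by (mu − md)/(2mB) relative to the matrix element, so it lands in the sub-MeV range for any ⟨B|q̄q|B⟩ of order a few GeV.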
+Another noteworthy aspect is that we were not able to obtain stable sum rules for the
+pion (cf. Sec. 2.2).
+The paper is organised as follows. In Sec. 2 the electromagnetic computation is presented,
+followed by the quark mass correction in Sec. 3. We give an overview of the results and
+the conclusions in Sec. 4. Comments on quark-hadron duality, the numerical input, some
+(extra) computations and useful classic results are collected in Apps. A, B, C and D
+respectively.
+2 Electromagnetic Mass Difference ∆mH|QED from QCD Sum Rules
+The electromagnetic mass difference follows from the formula quoted in (1.3) and it is our
+task to evaluate it. The main theoretical challenge is to incorporate the two hadrons, for
+which a non-perturbative method is needed. We use QCD sum rules [21] with a double
+dispersion relation. The first step involves the adoption of an interpolating operator. For
+the heavy mesons a pseudoscalar current is suitable and has proven to give good results in
+many other contexts. For particles of light quark masses, and Goldstone particles in
+particular [22], pseudoscalar interpolating operators are unsuitable as they are infested by
+so-called direct instantons [23].5 We therefore discuss the heavy mesons and the K-meson
+separately in Secs. 2.1 and 2.2 respectively.
+An important criterion in assessing the validity of our sum rules is the so-called daughter
+sum rule, which we consider worthwhile to present now. In the simple single dispersion
+relation case this criterion reads
+    m²B(s0, M²) = ∫cut^s0 e^(−s/M²) ρ(s) s ds / ∫cut^s0 e^(−s/M²) ρ(s) ds ,   (2.1)
+where M² is the Borel parameter, "cut" marks the onset of physical states, ρ(s) =
+rB δ(s − m²B) + ... is the spectral density, and the dots stand for states above the
+continuum threshold s0. Formally, the residue rB drops out in the ratio. In practice ρ(s)
+is a continuous function in partonic computations and Eq. (2.1) should be seen as a
+self-consistency criterion for an s0 in the range of (mB + 2mπ)² to (mB + 4mπ)². If that
+is the case then Eq. (2.1) can be used to fix the central value of s0.
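+
The self-consistency logic of Eq. (2.1) can be illustrated with a toy spectral density (a sketch under assumptions, not the paper's computation): a pole term rB δ(s − m²B) plus a flat continuum switched on at s_cont. Below the continuum onset the ratio returns m²B exactly; once s0 reaches into the continuum, states at higher s pull the ratio upward:

```python
# Toy illustration of the daughter sum rule, Eq. (2.1):
#   m_B^2(s0, M2) = int e^{-s/M2} rho(s) s ds / int e^{-s/M2} rho(s) ds
# with rho(s) = r_B delta(s - m_B^2) + c * theta(s - s_cont).
# All parameter values below are illustrative assumptions.

import math

def daughter_ratio(s0, M2, mB2=27.88, rB=1.0, c=0.05, s_cont=30.9, n=4000):
    # The pole contributes analytically; the flat continuum is
    # integrated with a simple midpoint Riemann sum up to s0.
    num = rB * math.exp(-mB2 / M2) * mB2
    den = rB * math.exp(-mB2 / M2)
    if s0 > s_cont:
        ds = (s0 - s_cont) / n
        for i in range(n):
            s = s_cont + (i + 0.5) * ds
            w = c * math.exp(-s / M2) * ds
            num += w * s
            den += w
    return num / den

# s0 below the continuum onset: pole only, ratio is m_B^2 (up to roundoff).
print(daughter_ratio(s0=30.0, M2=10.0))
# s0 above the onset: the continuum pulls the ratio above m_B^2.
print(daughter_ratio(s0=35.0, M2=10.0))
```

The Borel weight e^(−s/M²) suppresses the high-s region, which is why the ratio remains close to m²B even once some continuum is included; that stability in s0 and M² is the self-consistency check described in the text.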
+page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='1 B- and D-meson with Pseudoscalar Operators As motivated at the beginning of the section, the default choice for heavy-light 0− meson interpolating operators are JB = m+¯biγ5q , ZB ≡ ⟨ ¯B|JB|0⟩ = m2 BfB , m+ ≡ (mb + mq) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='2) In determining (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='3), one of the main challenges, is that the momenta for the two B-meson is degenerate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' We bypass this problem by introducing an auxiliary momentum r into one 5For the heavy mesons axial interpolating operators are unsuitable because the 1+ states are relatively low, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' for the JP = 0− B-meson with mB ≈ 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='28 GeV there is a 1+ B1(5721) with mB1 ≈ 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='72 GeV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' This is too close to the two pion threshold and even below the typical continuum threshold s0 ≈ (6 GeV)2 assumed for the pseudoscalar operators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' – 3 – b ¯q γ Figure 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Diagrams contributing to the correlation function in (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='3) with the double line repre- senting the b-quark.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' (left) main diagram of the QbQq mixed type.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' (middle) b- and q-quark self energies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' (right) ⟨¯qq⟩-condensate part to b-quark self energy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' There is no corresponding part for the q-quark self energy since ⟨¯bb⟩ is negligibly small.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' For the mass difference only the first one is relevant while the others are useful to obtain stable sum rules as described in the text.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' of the currents and let it flow out at one of the two interpolating operators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Concretely we start from Γqq′(p2, ˜p2) = c i3 � x,y,z,q ei(˜pz−ipy−(q+r)x)⟨0|TJ† B(z)jµ(x)jν(0)JB(y)|0⟩∆µν(q)|QqQq′ = � ∞ 0 ds � ∞ 0 d˜s ρΓqq′(s, ˜s) (s − p2)(˜s − ˜p2) = Z2 Bδqq′mB (m2 B − p2)(m2 B − ˜p2) + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' , (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='3) with c ≡ −iα 2mB(2π)3 , ˜p = p + r, shorthands xp = x · p, � q,x = � d4qd4x and the density is given by (2πi)2ρΓqq′(s, ˜s) = discs,˜s[Γqq′(s, ˜s)] , (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='4) the double discontinuity with further relevant explanations at the end of the section.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' The quantity ∆qq′mB denotes the part proportional to the QqQq′-charges.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Of course the aux- iliary momentum r has to disappear from the final result.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' This is achieved by the on-shell condition “˜p2 = p2” and is implemented in practice by treating them equally (p-˜p symme- try) and requiring the daughter sum rule to be satisfied reasonably well.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
The QCD sum rule is then given by
\[
\delta_{qq'} m_B = \frac{1}{Z_B^2} \int_{m_+^2}^{\bar\delta^{(a)}(m_+^2)} \! ds\, e^{(m_B^2 - s)/M^2} \int_{m_+^2}^{\bar\delta^{(a)}(s)} \! d\tilde s\, e^{(m_B^2 - \tilde s)/M^2}\, \rho_{\Gamma_{qq'}}(s,\tilde s) \;, \tag{2.5}
\]
where $M^2$ is the Borel parameter from the Borel transformation and $\bar\delta^{(a)}$ is the continuum threshold
\[
\bar\delta^{(a)}(s) = 2^{1/a} \sigma_0 \left[ 1 - \left( \frac{s}{2^{1/a} \sigma_0} \right)^{\!a}\, \right]^{1/a} \;, \tag{2.6}
\]
which is complicated for double dispersion sum rules [24]. Here it is implemented as in [25], but simplified since the two hadrons are identical, implying $M^2 \to 2\hat M^2$ and $\tilde s_0 = \tilde t_0 = \sigma_0^{(a)} 2^{1/a}$ (allowing for the elimination of those parameters). The number $\sigma_0 \approx 35\,\mathrm{GeV}^2$ takes on the rôle of $s_0$ in (2.1), and we shall use the notation $s_0 \equiv \sigma_0$ hereafter for reasons of familiarity. The parameter $a$ is a model parameter, and the independence of the result from its value is a measure of the quality of the result itself.
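Rewriting (2.6) as $\bar\delta^{(a)}(s) = [\,2\sigma_0^a - s^a\,]^{1/a}$ shows that the duality region is bounded by $s^a + \tilde s^a \le 2\sigma_0^a$. A minimal numerical sketch of this threshold function (the function name is ours): for $a = 1$ it reduces to the linear boundary $\tilde s = 2\sigma_0 - s$, while for $a \to \infty$ it approaches the constant threshold $\sigma_0$.

```python
def threshold(s, a, sigma0):
    """Continuum threshold (2.6): boundary of the duality region s^a + s~^a <= 2*sigma0^a."""
    return 2**(1 / a) * sigma0 * (1 - (s / (2**(1 / a) * sigma0))**a)**(1 / a)

# a = 1: linear boundary s~ = 2*sigma0 - s
print(threshold(10.0, 1, 35.0))     # 60.0
# large a: approaches the constant threshold sigma0
print(threshold(10.0, 1000, 35.0))  # ~35.02
```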
Let us turn to the computation. In perturbation theory there are the diagram connecting the $q$- to the $b$-quark and the self-energies. We focus on the former, as it is numerically dominant, and present the self-energies and the condensate contribution in App. C. The computation can be done analytically, and we obtain the following compact result for the density
\[
\rho_{\Gamma_{bq}} = \frac{N_c \alpha Q_q Q_b m_+^2}{32 \pi^3 m_B} \sqrt{\frac{\lambda \tilde\lambda}{s \tilde s}} \left[ A + \frac{B}{b} \ln\left( \frac{a+b}{a-b} \right) \right] \;, \tag{2.7}
\]
where
\[
a = m_q^2 - \frac{1}{4\sqrt{s\tilde s}} \left[ s\tilde s + (m_+ m_-)^2 \right] + \big( q \leftrightarrow b \big) \;, \qquad
b = \frac{1}{2} \sqrt{\frac{\lambda \tilde\lambda}{s \tilde s}} \;, \qquad A = m_-^2 \;,
\]
\[
B = \frac{Y \tilde Y}{s \tilde s} + \frac{1}{2}\, m_q^2 \sqrt{s\tilde s}\, (Y + \tilde Y) - \frac{1}{4}\, m_-^2 \left[ s + \tilde s + 4 m_b m_q + 2 m_q^2 \right] - \frac{1}{4}\, m_+^2 \sqrt{s\tilde s} + \big( q \leftrightarrow b \big) \;,
\]
with the further abbreviations
\[
m_\pm = m_b \pm m_q \;, \qquad \lambda = \lambda(s, m_b^2, m_q^2) \;, \qquad Y = \frac{s - m_+ m_-}{2s} \;, \tag{2.8}
\]
where $\lambda(x,y,z) = x^2 + y^2 + z^2 - 2xy - 2xz - 2yz$ is the Källén function, and in the tilde quantities $\tilde Y$ and $\tilde\lambda$ one has $s \to \tilde s$.
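The Källén function in (2.8) is standard; for completeness, a one-line implementation together with the equivalent form $\lambda(x,y,z) = (x - y - z)^2 - 4yz$ and its full symmetry in the three arguments:

```python
def kallen(x, y, z):
    """Kallen (triangle) function: x^2 + y^2 + z^2 - 2xy - 2xz - 2yz."""
    return x * x + y * y + z * z - 2 * (x * y + x * z + y * z)

print(kallen(5.0, 1.0, 2.0))                 # -4.0
print(kallen(1.0, 2.0, 5.0))                 # -4.0 (symmetric in all arguments)
print((5.0 - 1.0 - 2.0)**2 - 4 * 1.0 * 2.0)  # -4.0 (equivalent form)
```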
A few words about the computation. We have taken the discontinuity in (2.4) using Cutkosky rules. A crucial point is that we do not cut the photon propagator, as this would be a QED correction to the $B$-meson state and does not contribute to (1.3). This amends the meaning of (2.4).
Let us turn to the usage of the auxiliary momentum $r$ in the context of double dispersion sum rules. First we note that this is different from a form factor computation, e.g. $F^{\pi\to\pi}(q^2)$ [26], where the momentum transfer naturally takes on the rôle of this variable. It is closer to $\Delta F = 2$ matrix elements, as there is no momentum transfer but the flavour contractions naturally lead to a symmetric configuration (e.g. [27]), which is more straightforward. In fact, since our procedure (2.3) artificially breaks the $bq$-symmetry, $a$ and $B$ turn out to be non-symmetric, whereas $b$ and $A$ remain symmetric. This has to be remedied by the substitution
\[
a \to \tfrac{1}{2}\big( a + a|_{b \leftrightarrow q} \big) \;, \qquad B \to \tfrac{1}{2}\big( B + B|_{b \leftrightarrow q} \big) \;, \tag{2.9}
\]
which is apparent from the way the Cutkosky cuts work out.
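The substitution (2.9) is a plain symmetrisation over $b \leftrightarrow q$; a minimal sketch, with a hypothetical asymmetric function standing in for the coefficients $a$ or $B$:

```python
def symmetrize(f):
    """b <-> q symmetrisation of a coefficient function f(mb, mq), as in (2.9)."""
    return lambda mb, mq: 0.5 * (f(mb, mq) + f(mq, mb))

# Hypothetical asymmetric stand-in (NOT the actual coefficient from (2.8)):
f = lambda mb, mq: mb**2 * mq
f_sym = symmetrize(f)
print(f_sym(4.8, 0.1) == f_sym(0.1, 4.8))  # True: symmetric by construction
```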
We have performed the computation in a general gauge. Of course $\Gamma_{qq'}$ is gauge dependent but, as stated earlier, its discontinuity in the $bq$-quark lines is not. This is the case since the particles are put on the mass shell, and it is important that the quantity is infrared safe. Otherwise, as previously stated, one needs to introduce extra machinery [20].
2.1.1 Numerics

Our numerics have three cornerstones: the hadronic input parameters in Tab. 2, the daughter sum rule (2.1) and the choice of a mass scheme for $m_b$. Whereas there is nothing to say about point one, the others are in need of some explanation. We start with the $B$-meson case.

The daughter sum rule constrains the sum rule parameters: the continuum threshold $s_0$ and the Borel parameter $M^2$. Additional constraints, defining the Borel window, are the convergence of the condensate expansion and keeping the $B$-pole term dominant versus the continuum contribution [21].
Let us turn to the question of the mass scheme, which is not independent of the second point. We consider the pole, the kinetic and the $\overline{\mathrm{MS}}$ scheme. In the pole scheme the $b,c$-quark self-energy contributions (perturbative and condensate, diagrams 2 and 4 in Fig. 1) vanish and the sum rules are not stable, that is, there is no Borel window, and we therefore discard it. In the $\overline{\mathrm{MS}}$ scheme the $b$-quark self-energies are dominant, with the $b$-$q$ contribution comparable to the condensates. Since these contributions cancel in the observable $\Delta m$, this scheme is not ideal either and we therefore drop it. Hence we are left with the kinetic scheme for the $b$-quark, which shows good properties as for the $B \to \gamma$ form factor [28] and the $g_{BB^*\gamma}$-couplings [25]. For the $c$-quark the self-energies are not dominant and we use the $\overline{\mathrm{MS}}$-scheme, also because the kinetic scheme has proven unsuitable for $g_{DD^*\gamma}$ [25].
As stated above, the daughter sum rule (2.1) is used to fix $s_0$. For that purpose it is instructive to define the normalised ratio
\[
U(s_0, M^2) \equiv \frac{1}{m_B^2}\, m_B^2(s_0, M^2) \;, \tag{2.10}
\]
of the sum rule value over the experimental one, which has to be close to unity for self-consistency of the approach. This leads to
\[
\{s_0, \hat M^2\}_B = \{35.2(1.0),\, 2.6(0.5)\}\,\mathrm{GeV}^2 \;, \qquad \{s_0, \hat M^2\}_D = \{5.5(1),\, 1.0(0.25)\}\,\mathrm{GeV}^2 \;, \tag{2.11}
\]
for which
\[
U(s_0 \pm 1\,\mathrm{GeV}^2, M^2)\big|_{\Delta m_B|_{\mathrm{QED}}} = 1 \pm 0.01 \;, \qquad U(s_0 \pm 0.1\,\mathrm{GeV}^2, M^2)\big|_{\Delta m_D|_{\mathrm{QED}}} = 1 \pm 0.01 \;.
\]
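Operationally, fixing $s_0$ from the requirement $U = 1$ is a one-dimensional root search. The sketch below uses a hypothetical toy model for the ratio (2.10) (the actual sum-rule mass is far more involved) and assumes $U$ is continuous and monotonic in $s_0$ over the bracketing interval:

```python
def fix_s0(U, lo, hi, tol=1e-8):
    """Solve U(s0) = 1 by bisection; U - 1 must change sign across [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (U(mid) - 1.0) * (U(lo) - 1.0) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical toy model of U(s0) at fixed M^2 -- NOT the actual sum rule:
U_toy = lambda s0: 0.9 + 0.1 * (s0 / 35.0)
s0_star = fix_s0(U_toy, 20.0, 50.0)
print(round(s0_star, 3))  # 35.0
```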
Using the input parameters in Tab. 2 (with $m_b^{\mathrm{kin}}(1\,\mathrm{GeV})$, $\bar m_c(\bar m_c)$) and the $f_{B,D}$ sum rule to LO (cf. App. B.1) for the $Z_B$-factor, we get
\[
\Delta m_B|_{\mathrm{QED}} = +1.58^{+0.26}_{-0.23}\,\mathrm{MeV} \;, \qquad \Delta m_D|_{\mathrm{QED}} = +2.25^{+0.89}_{-0.52}\,\mathrm{MeV} \;, \tag{2.12}
\]
where the error is obtained by adding the individual errors in quadrature. The dominant error is due to the heavy quark mass $m_{b(c)}$ (50-60%). The Borel mass $M^2$ and the duality parameter $a$ each contribute a 20-25% uncertainty. The error in $a$ is quantified by taking the standard deviation of the results with $a \in [\tfrac{1}{2}, 1, 2, \infty]$. The errors for the $D$-meson are larger, reflecting the generically inferior quality of the sum rule.
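The error treatment just described, quadrature combination of individual errors and the $a$-spread quantified as a standard deviation over $a \in [\tfrac12, 1, 2, \infty]$, can be sketched with made-up numbers (illustrative only, not the actual error budget):

```python
import math
import statistics

def quadrature(errors):
    """Combine independent uncertainties in quadrature."""
    return math.sqrt(sum(e * e for e in errors))

print(quadrature([0.20, 0.10, 0.08]))  # ~0.237 (hypothetical individual errors, MeV)

# Spread over the duality parameter a, one hypothetical result per value of a:
results_vs_a = [1.52, 1.58, 1.61, 1.63]
print(statistics.stdev(results_vs_a))  # ~0.048
```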
2.2 $K$-meson with Axial Operators

As explained at the beginning of this section, pseudo-Goldstone bosons cannot be interpolated by pseudoscalar operators, and one therefore resorts to axial ones,
\[
A_\mu = \bar q\, \gamma_\mu \gamma_5\, s \;, \qquad \langle 0 | A_\mu | K(p) \rangle = i p_\mu f_K \;. \tag{2.13}
\]
The correlation function corresponding to (2.3) assumes the form
\[
\Gamma^{\alpha\beta}_{qq'}(p^2, \tilde p^2) = c\, i^3 \int_q \int_{x,y,z} e^{i(\tilde p z - p y - (q+r)x)}\, \langle 0 | T A^\alpha(z) j^\mu(x) j^\nu(0) A^{\dagger\beta}(y) | 0 \rangle\, \Delta_{\mu\nu}(q) \big|_{Q_q Q_{q'}} = g^{\alpha\beta}\, \Gamma^{(0)}_{qq'} + p^\alpha p^\beta\, \Gamma^{(2)}_{qq'} + \mathcal{O}(r) + \ldots \;, \tag{2.14}
\]
where the $\mathcal{O}(r)$-terms are not of interest to us. The decisive information is in the $p^\alpha p^\beta$-term, which in a hadronic representation takes on the form
\[
\Gamma^{(2)}_{qq'} = \frac{f_K^2\, \delta_{qq'} m}{(m_K^2 - p^2)(m_K^2 - \tilde p^2)} + \ldots \;, \tag{2.15}
\]
where the dots represent higher states in the spectrum (which includes the $K^*$-meson in this case).
Let us turn to the computation, which involves some practical matters. Computing the double discontinuity of $\Gamma^{(2)}_{qq'}$ is laborious, as there are open Lorentz indices. One may though obtain the same information from a linear combination of (2.3) and (2.14) with contracted indices. It follows from Ward identities that ($d = 4$)
\[
\Gamma^{(2)}(s, s) = \frac{1}{s^2 (1-d)} \big( s\, \Gamma^\alpha{}_\alpha(s, s) - d\, \Gamma(s, s) \big) \;, \tag{2.16}
\]
where we omitted the $qq'$-subscript for brevity and have set $s = \tilde s$. The generalisation to $s \neq \tilde s$ is in principle ambiguous, but fortunately the differences are not that sizeable. Concretely we use
\[
\Gamma^{(2)}(s, \tilde s) = \frac{1}{s \tilde s (1-d)} \left( \tfrac{1}{2}(s + \tilde s)\, \Gamma^\alpha{}_\alpha(s, \tilde s) - d\, \Gamma(s, \tilde s) \right) \;, \tag{2.17}
\]
and the analogous expression of (2.7), which is lengthy for the Kaon, is given in a Mathematica ancillary notebook attached to the arXiv version. Changing the prescription (2.17) by $\tfrac{1}{2}(s + \tilde s) \to \sqrt{s \tilde s}$ results in a 15% change, which is sizeable but not extremely large and well within the error. In addition, we use a weight function $1/(s \tilde s)$, as described in App. A.2, as otherwise the daughter sum rule is off by at least a factor of two, which is very large in view of how well it works in all other cases.
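The ambiguity in the off-diagonal continuation of (2.16), i.e. $\tfrac12(s + \tilde s)$ in (2.17) versus $\sqrt{s\tilde s}$, is the difference between an arithmetic and a geometric mean: it vanishes on the diagonal $s = \tilde s$ and stays moderate for nearby arguments, as a quick check illustrates:

```python
import math

def rel_diff(s, st):
    """Relative difference between the prescriptions (s + st)/2 and sqrt(s*st)."""
    am = 0.5 * (s + st)
    gm = math.sqrt(s * st)
    return (am - gm) / am

print(rel_diff(1.0, 1.0))  # 0.0 -- the two prescriptions agree on the diagonal
print(rel_diff(0.5, 1.0))  # ~0.057
```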
Proceeding as before, we obtain the following values
\[
\{s_0, \hat M^2\}_K = \{0.7(1),\, 0.95(0.5)\}\,\mathrm{GeV}^2 \;, \qquad U(s_0 \pm 0.1, M^2)\big|_{\Delta m_K|_{\mathrm{QED}}} = 1.00 \pm 0.10 \;, \tag{2.18}
\]
for the sum rule parameters and the daughter sum rule (2.10).
Using the input parameters in Tab. 2, the $f_K$ sum rule to LO (cf. App. B.1) and (2.18), we get
\[
\Delta m_K|_{\mathrm{QED}} = +1.85^{+0.42}_{-0.66}\,\mathrm{MeV} \;. \tag{2.19}
\]
Scale dependent quantities are evaluated at $\mu = 2\,\mathrm{GeV}$. The uncertainty again comes from adding the individual errors in quadrature. The dominant uncertainty (75%) comes from the $m_s$ mass, with the remaining uncertainty due to the duality parameter $a$ in (2.6).
As stated in the introduction, the pion proved more difficult. That is, we were not able to find stable sum rules satisfying the daughter sum rule for reasonable values of the continuum threshold.⁶ We believe that this is due to its small mass $m_\pi$, which is considerably below the other hadronic masses. Conversely, the Kaon mass, while being that of a pseudo-Goldstone, is much closer to the other hadrons (due to $m_s$ being close to $\Lambda_{\mathrm{QCD}}$).

⁶The extra disconnected diagram for the $\pi^0$, e.g. [18], is small since the $\gamma_5$ generates a Levi-Civita tensor, which enforces two extra loops. This is reflected in the smallness of the lattice result [18] and also by the fact that the LO chiral Lagrangian does not contribute to the $\pi^0$ (cf. App. D.2).
3 Linear Quark Mass Correction $\Delta m_H|_{m_q}$

As stated in the introduction (and cf. App. D.1), the $\mathcal{O}(m_q)$-corrections are governed by $\langle H | \bar q q | H \rangle$ (1.7). For the $B,D$-mesons we compute this matrix element from QCD sum rules in Sec. 3.1, using similar techniques as for the QED correction, and for the light mesons we resort to soft theorems, cf. Sec. 3.3, as the corresponding sum rules are inferior.
3.1 QCD Sum Rule Computation of $\langle \bar H | \bar q q | \bar H \rangle$ for $H = B, D$

In order to anticipate the hierarchy of diagrams shown in Fig. 2, it is worthwhile to contemplate the heavy quark behaviour. The matrix element scales like ($H = B$ for definiteness)
\[
\langle B | \bar q q | B \rangle = \mathcal{O}(m_b) \;, \tag{3.1}
\]
for relativistically normalised states, $\langle B(p) | B(q) \rangle = 2 E_B(\vec p)\, (2\pi)^3 \delta^{(3)}(\vec p - \vec q)$, due to the factor $E_B = \mathcal{O}(m_b)$. On the one hand, the operator $\bar q q$ demands a chirality flip in perturbation theory, and this cannot come from the $m_b$-mass, since the latter is entirely kinematic, as we have just established. On the other hand, the condensate contribution $\langle \bar q q \rangle$ itself does not require this flip and is therefore unsuppressed and numerically leading.

Figure 2. Diagrams contributing to the matrix element $\langle B | \bar q q | B \rangle$. They are analogous to the ones in Fig. 1, but the square blob denotes the insertion of the $\bar q q$-operator. Perturbation theory is minimal and the quark condensate diagram is the main contribution. The mixed condensate diagrams $\langle \bar q G q \rangle$ are mainly useful to stabilise the sum rule.
To do the computation we start from the following correlation function

    \Pi(p^2, \tilde p^2, r) = i^2 \int_{y,z} e^{i(\tilde p\, z - p\, y - r\, x)}\, \langle 0|T\, J_B^\dagger(z)\, (\bar q q)(x)\, J_B(y)|0\rangle ,   (3.2)

where J_B has been defined in (2.2) and the auxiliary momentum r takes on the same rôle as before. The double dispersion relation of the correlation function reads

    \Pi(p^2, \tilde p^2, r) = \int ds\, d\tilde s\, \frac{\rho_\Pi(s,\tilde s)}{(s - p^2 - i0)(\tilde s - \tilde p^2 - i0)} = \frac{Z_B^2\, \langle\bar B|\bar q q|\bar B\rangle}{(m_B^2 - p^2)(m_B^2 - \tilde p^2)} + \dots ,   (3.3)

with (2\pi i)^2 \rho_\Pi(s,\tilde s) = \mathrm{disc}_{s,\tilde s}[\Pi(s,\tilde s)], and the matrix element is then given by

    \langle\bar B|\bar q q|\bar B\rangle = \frac{1}{Z_B^2} \int_{m_+^2}^{\bar\delta^{(a)}(m_+^2)} ds\, e^{(m_B^2 - s)/M^2} \int_{m_+^2}^{\bar\delta^{(a)}(s)} d\tilde s\, e^{(m_B^2 - \tilde s)/M^2}\, \rho_\Pi(s,\tilde s) ,   (3.4)

with \bar\delta^{(a)} defined in (2.6). The three contributions depicted in Fig. 2 are described below.
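The way the Borel kernels in (3.4) isolate the ground-state pole can be illustrated with a one-variable toy model. Everything below (the flat continuum density, the cutoff, the numerical values) is invented for illustration; this is not the paper's actual double sum rule:

```python
import math

# Toy one-variable analogue of the Borel-weighted integral in (3.4):
# a ground-state pole at s = m_B^2 plus a flat "continuum" above s0.
# All values are illustrative, not the paper's inputs.
m_B2 = 27.9   # GeV^2, roughly m_B^2
s0   = 35.0   # GeV^2, continuum threshold, cf. (3.9)
M2   = 4.0    # GeV^2, Borel parameter, cf. (3.9)

def borel_weight(s):
    """Exponential kernel e^{(m_B^2 - s)/M^2} of (3.4)."""
    return math.exp((m_B2 - s) / M2)

# Pole term: delta(s - m_B^2) integrates to borel_weight(m_B2) = 1.
pole = borel_weight(m_B2)

# Continuum term: unit flat density above s0, cut off at 100 GeV^2.
ds = 0.01
continuum = sum(borel_weight(s0 + i * ds) * ds
                for i in range(int((100.0 - s0) / ds)))

print(pole, continuum)
```

The kernel weights the pole with e^0 = 1, while the continuum above s_0 enters with at most e^{(m_B^2 - s_0)/M^2} ≈ 0.17 per unit density — the usual sum-rule mechanism for suppressing excited states.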
Perturbation theory is given by

    \rho_\Pi(s,\tilde s) = \frac{m_+^2 N_c m_q}{2\pi^2}\, \frac{s - (m_b - m_q)^2}{s + m_q^2 - m_b^2}\, \lambda^{1/2}\, \delta(\tilde s - s) ,   (3.5)

with the anticipated O(m_q) suppression. This term is negligible.
The \langle\bar q q\rangle condensate evaluates to

    \langle\bar B|\bar q q|\bar B\rangle = -\frac{4 m_+^2 m_b^2\, \langle\bar q q\rangle}{Z_B^2}\, e^{2(m_B^2 - m_b^2)/M^2} ,   (3.6)

which is not suppressed by O(m_q) and thus dominant. The mixed condensate yields

    \langle\bar B|\bar q q|\bar B\rangle = -\frac{m_+^2\, \langle\bar q\, \sigma\!\cdot\! g_s G\, q\rangle}{Z_B^2}\, e^{2(m_B^2 - m_b^2)/M^2} \left[ \left(1 - \frac{3 m_b^2}{M^2}\right) + \left(\frac{5}{8} + \frac{2 m_b^2}{M^2} - \frac{4 m_b^4}{M^4}\right) \right] ,   (3.7)

which is not suppressed either, as it is in the same chirality representation as the quark condensate. The first and second terms in round brackets are from the third and fourth diagrams in Fig. 2, respectively.
It is worthwhile to comment on how the lack of m_q suppression in the condensate contributions arises. Its origin is the propagator 1/(r^2 - m_q^2 + i\epsilon); working in the \vec r = 0 frame,

    r^2 - m_q^2 + i\epsilon = \big(\sqrt s - (\sqrt{\tilde s} + m_q - i\epsilon')\big)\big(\sqrt s - (\sqrt{\tilde s} - m_q + i\epsilon')\big) ,   (3.8)

which when cut gives a term of the form \frac{\sqrt s}{m_q}\, \delta\big(s - (\sqrt{\tilde s} + m_q)^2\big). The 1/m_q thus removes the O(m_q) suppression in the numerator. Numerically, perturbation theory is entirely negligible; this is also the reason for not including the gluon condensate, which is expected to be further suppressed by O(\Lambda_{QCD}^4/M^4) as compared to perturbation theory.
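The factorisation (3.8) is elementary algebra once one uses r^2 = (\sqrt s - \sqrt{\tilde s})^2 in the \vec r = 0 frame (with \epsilon \to 0); a quick numerical check with arbitrary sample values:

```python
import math

# Check r^2 - m_q^2 = (sqrt(s) - (sqrt(s~) + m_q)) * (sqrt(s) - (sqrt(s~) - m_q))
# in the r-vector = 0 frame, where r^2 = (sqrt(s) - sqrt(s~))^2.
# The sample values (GeV^2 and GeV) are arbitrary.
s, s_tilde, m_q = 30.0, 28.5, 0.005

lhs = (math.sqrt(s) - math.sqrt(s_tilde))**2 - m_q**2
rhs = ((math.sqrt(s) - (math.sqrt(s_tilde) + m_q)) *
       (math.sqrt(s) - (math.sqrt(s_tilde) - m_q)))
assert math.isclose(lhs, rhs, rel_tol=1e-12)

# Cutting the propagator localises sqrt(s) at sqrt(s~) + m_q; the residue of
# that pole carries a factor 1/(2 m_q), which is what removes the O(m_q)
# suppression of the perturbative numerator.
residue_factor = 1.0 / (2 * m_q)
print(residue_factor)
```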
3.1.1 Numerics

The basic procedure for the numerics is the same as described in Sec. 2.1.1. However, the choice of scheme is not as important in this case: any of the schemes (pole, kinetic and \overline{MS}) gives similar results and indicates stability. The situation is certainly clearer with respect to the m_b mass itself, as the matrix element is O(m_b) (3.1) and \Delta m_B|_{m_q} itself is O(m_b^0), whereas \Delta m_B|_{QED} is computed from a non-local correlation function where the m_b dependence is more difficult to track. Since the perturbative contribution is suppressed, there is no s_0 dependence (there would be at NLO in \alpha_s). Hence we can fix the Borel value M^2 to satisfy the daughter sum rule (2.10), obtaining the following sum-rule parameters

    \{s_0, \hat M^2\}_B = \{35.0, 4.0\}\ \mathrm{GeV}^2 , \qquad \{s_0, \hat M^2\}_D = \{6.0, 0.75\}\ \mathrm{GeV}^2 ,   (3.9)

and daughter sum rules

    U(s_0, \hat M^2 \pm 0.15\ \mathrm{GeV})\,\Delta m_B|_{m_q} = 1.00^{+0.03}_{-0.02} , \qquad U(s_0, \hat M^2 \pm 0.05\ \mathrm{GeV})\,\Delta m_D|_{m_q} = 1.00^{+0.20}_{-0.12} .   (3.10)
Using the input parameters in Tab. 2 (with m_b^{kin}(1\ \mathrm{GeV}) and \bar m_c(\bar m_c)), the f_{B,D} sum rule to LO (cf. App. B.1) and (3.9), we get

    \langle\bar B|\bar q q|\bar B\rangle_{\mu = 1\ \mathrm{GeV}} = 5.99^{+1.99}_{-1.41}\ \mathrm{GeV} , \qquad \langle\bar D|\bar q q|\bar D\rangle_{\mu = \bar m_c} = 3.40^{+1.78}_{-1.71}\ \mathrm{GeV} ,   (3.11)

for the matrix elements and

    \Delta m_B|_{m_q} = -1.88^{+0.49}_{-0.71}\ \mathrm{MeV} , \qquad \Delta m_D|_{m_q} = +2.68^{+1.48}_{-1.38}\ \mathrm{MeV} ,   (3.12)

for the mass differences. As this is a LO computation the errors are large, coming primarily from M^2, with a small contribution (20%) from the light-quark masses. Note that the chosen value of M^2 is not independent of higher-order \alpha_s corrections. For the D meson especially, the convergence of the sum rule is not good; this is reflected in the mixed condensate contributing a sizeable 20% uncertainty.
3.2 SU(3)_F estimates of \langle\bar H|\bar q q|\bar H\rangle for H = B, D

Alternatively, one may use SU(3)_F flavour symmetry, \langle B|\bar q q|B\rangle \approx \langle B_s|\bar s s|B_s\rangle, to estimate \langle B|\bar q q|B\rangle [12]. Following this analysis one may write (m_{ud} \equiv \frac12(m_u + m_d))

    2m_{B_s}^2 - m_{B^+}^2 - m_{B^0}^2 = 2(m_s - m_{ud})\,\langle B|\bar q q|B\rangle ,   (3.13)

from which

    \langle B|\bar q q|B\rangle \approx \frac{m_{B_s}^2 - m_B^2}{m_s - m_{ud}}   (3.14)

follows. Employing the input from the PDG [29], this leads to^7

    \Delta m_B|_{m_q} = -2.37^{+0.35}_{-0.43} \pm 20\%_{SU3}\ \mathrm{MeV} , \qquad \Delta m_D|_{m_q} = +2.81^{+0.51}_{-0.41} \pm 20\%_{SU3}\ \mathrm{MeV} .   (3.16)

We have added a characteristic 20% SU(3)_F violation due to the use of \langle B|\bar q q|B\rangle \approx \langle B_s|\bar s s|B_s\rangle. The results are well compatible with (3.12) and we shall not use them any further. Note that in the heavy-quark limit we have \Delta m_B|_{m_q} = -\Delta m_D|_{m_q}, since the c and b are up- and down-type quarks respectively. This heavy-quark-limit relation holds reasonably well, as already observed in [12] (with slightly different input).
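As a rough illustration of (3.14) (not the paper's actual evaluation, which uses the inputs of Tab. 2): plugging in PDG-like meson masses and \overline{MS} light-quark masses at \mu = 2 GeV gives a matrix element of several GeV, i.e. O(m_b) as anticipated in (3.1). The value is scheme- and scale-dependent through the quark masses, so it need not coincide with (3.11), which is quoted at \mu = 1 GeV:

```python
# Illustrative evaluation of (3.14); inputs are PDG-like values,
# not the precise inputs used in the paper (all masses in GeV).
m_Bs = 5.3669                       # B_s meson mass
m_B  = 5.2793                       # B meson mass (roughly isospin-averaged)
m_s  = 0.0935                       # MS-bar strange mass at mu = 2 GeV
m_ud = 0.5 * (0.00216 + 0.00467)    # (m_u + m_d)/2 at mu = 2 GeV

# <B|qbar q|B> ~ (m_Bs^2 - m_B^2) / (m_s - m_ud), eq. (3.14)
me = (m_Bs**2 - m_B**2) / (m_s - m_ud)
print(round(me, 2), "GeV")  # several GeV, i.e. O(m_b)
```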
3.3 Soft Goldstone estimate of \langle L|\bar q q|L\rangle for L = \pi, K

The matrix elements \langle L|\bar q q|L\rangle, where L = \pi, K is a pseudo-Goldstone boson, may be estimated using soft-pion techniques, which in this case lead to the famous GMOR relation [31]. Concretely [32],

    m_{\pi^{+,0}}^2 = (m_u + m_d)B_0 , \qquad m_{K^+}^2 = (m_u + m_s)B_0 , \qquad m_{K^0}^2 = (m_d + m_s)B_0 ,   (3.17)

which hold to first order in the quark masses, with no QED corrections; the constant is B_0 = -2\langle\bar q q\rangle/f_\pi^2 \approx 2.26\ \mathrm{GeV} at \mu = 2\ \mathrm{GeV}. We see that for the pions there is no difference at linear order, which is a consequence of isospin [10]. The pion mass splitting is a \Delta I = 2 isospin effect, since the relevant matrix element has two pion states while the quark masses themselves are of \Delta I = 1; hence it takes at least two powers of the quark-mass difference. Fortunately, the latter follows in a straightforward manner from chiral perturbation theory, and one obtains at LO

    \Delta m_K|_{m_q} = \frac{m_u - m_d}{m_s - m_{ud}}\, \frac{m_K^2 - m_\pi^2}{2m_K} = \frac{m_u - m_d}{2m_{ud}}\, \frac{m_\pi^2}{2m_K} = -6.74^{+0.98}_{-1.21}\ \mathrm{MeV} ,

    \Delta m_\pi|_{m_q} = \frac{1}{16}\, \frac{m_d - m_u}{m_s - m_{ud}}\, \frac{m_d - m_u}{m_{ud}}\, m_\pi = +0.16^{+0.06}_{-0.05}\ \mathrm{MeV} ,   (3.18)

using the values from the PDG [29]. As expected, the pion contribution is rather small, being second order in the quark-mass difference. It is noteworthy that one obtains \Delta m_K|_{m_q} \approx -5.7\ \mathrm{MeV} when using (3.17) directly, which can be seen as an SU(3)_F correction that is well covered by the quoted uncertainty.

^7 Or taking the \eta \to 3\pi analysis [30], which in this case makes a difference, results in

    \Delta m_B|_{m_q} = -2.54^{+0.17}_{-0.18} \pm 20\%_{SU3}\ \mathrm{MeV} , \qquad \Delta m_D|_{m_q} = +3.01^{+0.21}_{-0.20} \pm 20\%_{SU3}\ \mathrm{MeV} ,   (3.15)

a more precise result.
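The LO relations (3.17) and (3.18) are straightforward to evaluate. The sketch below uses PDG-like inputs (\overline{MS} quark masses at 2 GeV and the neutral-pion mass), which may differ slightly from the paper's exact inputs; it reproduces the quoted central values within their uncertainties:

```python
# LO chiral-perturbation-theory estimates (3.17)-(3.18).
# PDG-like inputs in GeV; the paper's exact inputs may differ.
m_u, m_d, m_s = 0.00216, 0.00467, 0.0935   # MS-bar at mu = 2 GeV
m_ud = 0.5 * (m_u + m_d)
m_pi0, m_K = 0.13498, 0.4937               # meson masses

# GMOR consistency check: m_pi^2 = (m_u + m_d) * B0 with B0 ~ 2.26 GeV
# (the text's value); the LO prediction lands within ~15% of m_pi.
B0 = 2.26
m_pi_pred = ((m_u + m_d) * B0) ** 0.5

# Kaon splitting, second form of (3.18); using the charged-pion mass
# instead shifts the result by a few percent.
dmK = (m_u - m_d) / (2 * m_ud) * m_pi0**2 / (2 * m_K)
print(m_pi_pred, dmK * 1e3, "MeV")  # dmK close to -6.74 MeV of (3.18)
```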
4 Final Overview and Conclusions

In this paper we have computed the mass differences of the charged and neutral B-, D- and K-mesons. The results, which originate from electromagnetic and quark-mass effects, are summarised and contrasted with experimental values in Tab. 1. The electromagnetic contribution is computed from the second-order formula (1.3) in Sec. 2 and may be regarded as the core part of this paper. \Delta m_\pi|_{QED} is taken from a soft-pion theorem (cf. App. D.2) for completeness and comparison. Quark-mass effects are obtained from the Feynman-Hellmann formula (1.7), whose corresponding matrix element is computed in Sec. 3.1 for the B and the D respectively, whereas for the K and the \pi a soft theorem turns out to be more reliable.

The results obtained are consistent with the current experimental values. The uncertainties are above 20%, and indeed more cannot be expected from a double dispersion sum rule at leading order in the strong coupling constant. Experimental uncertainties are one to two orders of magnitude lower.

The values in Tab. 1 deserve some comments, as they are not easily guessed by rules of thumb by a practitioner of non-perturbative QCD. The parametric estimate \Delta m_H|_{QED} = c\, Q_H^{eff}\, \frac{\alpha}{\pi}\, \Lambda_{QCD}, with \Lambda_{QCD} = 200\ \mathrm{MeV} and Q_D^{eff} = 2Q_{B,K}^{eff} = 2/3, leads to c \approx 10\text{--}20, which is a rather large number. To put this into perspective, one should keep in mind that these kinds of estimates are not straightforward, as the mass difference is obtained from a non-local (long-distance) correlation function (1.3). The scale of the quark-mass effect is of course set by m_d - m_u \approx 2.5\ \mathrm{MeV}, and its sign depends on whether the non-u,d quark is of the up (charm) or down (beauty, strange) type. The cancellation, to almost an order of magnitude, of the electromagnetic and the quark-mass contributions for the B-meson is remarkable, leading to an inflated uncertainty in \Delta m_B. The main aim of this paper was to show that it is possible to understand the isospin mass differences from QCD sum rules, that is, to obtain values compatible with experiment.
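The coefficient c in the parametric estimate \Delta m_H|_{QED} = c\, Q_H^{eff} (\alpha/\pi) \Lambda_{QCD} can be extracted directly from the central values in Tab. 1; a back-of-envelope check with \alpha \approx 1/137 and \Lambda_{QCD} = 200 MeV as in the text indeed gives c of order ten:

```python
import math

# Extract c from Delta m_H|QED = c * Qeff * (alpha/pi) * Lambda_QCD,
# using the central values of Tab. 1 (in MeV).
alpha, Lambda = 1.0 / 137.036, 200.0        # Lambda_QCD in MeV
dm_qed = {"B": 1.58, "D": 2.25, "K": 1.85}  # Delta m_H|QED from Tab. 1
Qeff   = {"B": 1/3, "D": 2/3, "K": 1/3}     # Qeff_D = 2 Qeff_{B,K} = 2/3

c = {H: dm_qed[H] / (Qeff[H] * (alpha / math.pi) * Lambda) for H in dm_qed}
print(c)  # c of order 10, i.e. "rather large" as stated in the text
```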
+page_content=' – 11 – H ∆mH|QED ∆mH|mq ∆mH ∆mH|PDG[29] B +1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='58(24) MeV −1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='88(60) MeV a −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='30(65) MeV −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='32(5) MeV D +2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='25(70) MeV +2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='7(1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='4) MeV a +4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='9(1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='6) MeV +4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='822(15) MeV K +1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='85(54) MeV − 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='7(1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='1) MeV b −4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='9(1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='2) MeV −3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='934(20) MeV π +4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='8(1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='2) MeV c +0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='16(5) MeV b +5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='0(1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='2) MeV +4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='5936(5) MeV Table 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
Our values of ∆mH due to the electromagnetic mass difference and the quark masses, compared to the PDG values. The entries marked with a are obtained from the ⟨H|q̄q|H⟩ matrix element in conjunction with the Feynman-Hellmann theorem (valid to LO in mq). The values in italics should not be regarded as predictions of this work. E.g. b: derived from the soft theorem for (pseudo-)Goldstone bosons (cf. App. 3.3); c: results from the soft theorem in conjunction with the Weinberg sum rules (cf. App. D.2).
It is noteworthy that ∆mπ|mq = O((mu − md)²), which explains its smallness.
For comparison, some lattice values are ∆mD = 5.47(53) MeV and ∆mK = −4.07(15)(15) MeV [4], and ∆mD = 4.68(10)(13) MeV [3]. These are of course more precise, as the lattice is well suited for mass determination, even in the presence of QED, and due to the full inclusion of QCD. The sum rule computation could be improved by including radiative corrections in the strong coupling constant, which would be a formidable task. Perhaps more interestingly, the formalism developed in this paper could be applied to baryons, to obtain for instance the proton-neutron mass difference.
Acknowledgments
RZ is supported by a CERN associateship and an STFC Consolidated Grant, ST/P0000630/1. We are grateful to Michele Della Morte, Antonin Portelli and Max Hanson for informative comments on the lattice literature.
A Variants of Quark-Hadron Duality
In this appendix we elaborate on variations of quark-hadron duality. This is best explained by example. Consider the axial correlator in connection with the K,

\Pi_{\alpha\beta} = i \int d^4x\, e^{ipx} \langle 0| T A_\alpha^\dagger(x) A_\beta(0) |0\rangle = p_\alpha p_\beta\, \Pi(p^2) + g_{\alpha\beta}\, \hat\Pi(p^2) \,,   (A.1)

with A_\beta defined in (2.13).
The Kaon appears in the first structure,

\Pi(p^2) = \frac{f_K^2}{m_K^2 - p^2} + \dots \,,   (A.2)

where the dots stand for higher states as usual.
QCD sum rules consist of two steps. Firstly, the observation that

\Pi(p^2) \approx \Pi(p^2)^{\rm pQCD} \,,   (A.3)

for some p^2 outside the physical region (could be p^2 < 0), where pQCD stands for perturbative QCD with OPE improvements. In a second step one rewrites Eq. (A.3) as a dispersion relation, followed by a Borel transform under which (s - p^2)^{-1} \to \exp(-s/M^2) (M^2 is the Borel parameter), which results in

\int_0^\infty e^{-s/M^2}\rho(s)\,ds \approx \int_0^\infty e^{-s/M^2}\rho^{\rm pQCD}(s)\,ds \,,   (A.4)

with

\rho(s) = \frac{1}{2\pi i}\,{\rm disc}_s\,\Pi(s) = f_K^2\,\delta(s - m_K^2) + \dots \,,

and the pQCD part defined analogously.
The one assumption is then that this integral can be broken up as follows,

\int_0^{s_0} e^{-s/M^2}\rho(s)\,ds \approx \int_0^{s_0} e^{-s/M^2}\rho^{\rm pQCD}(s)\,ds \,,   (A.5)

and (A.5) is sometimes referred to as semi-global quark-hadron duality [33]. One way to determine s_0 is to impose the daughter sum rule (2.1); for consistency with the duality assumption, s_0 then ought to be somewhere between (m_K + 2m_\pi)^2 and (m_K + 4m_\pi)^2.
We want to briefly contemplate for which types of weight functions \omega(s) the analogue of (A.5),

\int_0^{s_0} e^{-s/M^2}\rho(s)\,\omega(s)\,ds \approx \int_0^{s_0} e^{-s/M^2}\rho^{\rm pQCD}(s)\,\omega(s)\,ds \,,   (A.6)

with the corresponding analogue of (2.1),

m_B^2 = \int_{\rm cut}^{s_0} e^{-s/M^2}\rho^{\rm pQCD}(s)\,\omega(s)\,s\,ds \Big/ \int_{\rm cut}^{s_0} e^{-s/M^2}\rho^{\rm pQCD}(s)\,\omega(s)\,ds \,,   (A.7)

can hold. The crucial point is to be able to justify the analogue of Eq. (A.3).
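As a rough numerical illustration of Eq. (A.7) (a toy model of our own, not taken from the paper), one can feed a "pQCD-like" density \rho^{\rm pQCD}(s) \propto s into the Borel-weighted moment ratio with \omega(s) = 1 and check that, for a suitably chosen duality interval, it reproduces the position of a narrow resonance placed at s = 1 GeV^2 on the hadronic side:

```python
import math

def borel_moment(rho, s0, M2, k, n=20000):
    """Midpoint rule for the Borel moment  int_0^{s0} e^{-s/M2} rho(s) s^k ds."""
    h = s0 / n
    return h * sum(math.exp(-(i + 0.5) * h / M2) * rho((i + 0.5) * h) * ((i + 0.5) * h) ** k
                   for i in range(n))

# Toy continuum density standing in for rho^pQCD; all dimensionful numbers in GeV^2.
rho = lambda s: s / (4 * math.pi ** 2)
s0, M2 = 1.65, 1.5          # toy choices of duality interval and Borel parameter

# Analogue of (A.7) with omega(s) = 1 and cut = 0:
m2_est = borel_moment(rho, s0, M2, 1) / borel_moment(rho, s0, M2, 0)
print(m2_est)               # close to the input resonance position m^2 = 1 GeV^2
```

The estimate is only as good as the duality window: moving s0 well away from the tuned value drags the ratio away from the resonance position, which is the practical reason s0 must be fixed by a consistency condition such as the daughter sum rule.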
A.1 Weight function \omega(s) = s
We might start by rewriting the p_\alpha p_\beta-part in (A.1) as follows:

p_\alpha p_\beta\,\Pi(p^2) = \frac{p_\alpha p_\beta}{p^2}\,\big(p^2\,\Pi(p^2)\big)\,.   (A.8)

For the pQCD part one may directly write \rho^{\rm pQCD}(s) \to s\,\rho^{\rm pQCD}(s), since p^2 does not lead to new singularities. Using (A.2), the QCD part can be written as

p^2\,\Pi(p^2) = p^2\,\frac{f_K^2}{m_K^2 - p^2} + \dots = -f_K^2 + m_K^2\,\frac{f_K^2}{m_K^2 - p^2} + \dots \,,   (A.9)

where -f_K^2 is a constant that will disappear under Borel transformation; thus \rho(s) \to s\,\rho(s) works the very same way. The analogue of (A.3) can be justified in this case by replacing A_\alpha^\dagger(x) \to -\partial^2 A_\alpha^\dagger(x) in (A.1).^8
Polynomial weight functions are generally referred to as moments and are familiar to the community, e.g. the moments in b \to c\ell\nu [34]. It is quite clear that one cannot take arbitrarily high moments, as duality will then be challenged since smoothness is lost.
^8 In our case this is not trivial, as A_\alpha^\dagger is not QED gauge invariant, but it can still be used at LO. In the general case this requires more thought.
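The loss of stability for high moments can be seen already in a toy model (ours, not the paper's): estimating the resonance position from ratios of Borel moments with \omega(s) = s^n, the estimate drifts upward toward s_0 as n grows, because the weight concentrates ever more strongly at the endpoint where duality is least reliable.

```python
import math

S0, M2 = 1.65, 1.5   # toy duality interval and Borel parameter, in GeV^2

def moment(n, k=20000):
    """Borel moment  int_0^{S0} e^{-s/M2} s^n rho(s) ds  with toy density rho(s) = s
    (midpoint rule, so the integrand carries s^(n+1))."""
    h = S0 / k
    return h * sum(math.exp(-(i + 0.5) * h / M2) * ((i + 0.5) * h) ** (n + 1) for i in range(k))

# omega(s) = s^n: mass-squared estimate from the ratio of consecutive moments
estimates = [moment(n + 1) / moment(n) for n in range(5)]
print(estimates)  # rises monotonically from near 1.0 toward the endpoint S0
```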
A.2 Weight function \omega(s) = \frac{1}{s-\eta}
Choosing a weight function

\omega(s) = \frac{1}{s - \eta} \,,   (A.10)

is equivalent to working with a subtracted dispersion relation of the form

\frac{\Pi(p^2) - \Pi(\eta)}{p^2 - \eta} = \int \frac{ds\,\rho(s)}{(s - p^2)(s - \eta)} + c \,,   (A.11)

where c = -\int ds\,\rho_A(s)/(s(s - \eta)) + \Pi'(\eta) is a subtraction constant such that the limit p^2 \to 0 comes out correctly. The constant c is, though, not important in the end, as it vanishes under Borel transformation. The question of whether one can use (A.10) then turns into the question of whether the left-hand side can be computed reliably. In our application to Kaons we have chosen \eta = 0, which is close to but still below the Kaon resonance. We have checked that for the f_K sum rule with s_0 = 0.7 GeV^2 the agreement is reasonable, and this serves at least as a partial justification of the procedure in Sec. 2.2.
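Both remarks about constants dropping out rest on the definition of the Borel transform. Under the usual operator definition (conventions vary; this is a sketch under the normalisation in which a pole maps to e^{-s/M^2}/M^2), writing Q = -p^2, the operator Q^n/(n-1)! \cdot (-d/dQ)^n evaluated at Q = nM^2 as n \to \infty sends (s - p^2)^{-1} \to e^{-s/M^2}/M^2, while any polynomial in p^2 (such as the constants -f_K^2 or c above) has vanishing high derivatives and is annihilated. A quick numerical check of the pole case:

```python
import math

def borel_pole_approximant(s, M2, n):
    """n-th Borel approximant of 1/(s - p^2). With Q = -p^2, applying
    Q^n/(n-1)! * (-d/dQ)^n to 1/(s + Q) gives n*Q^n/(s + Q)^(n+1);
    evaluate at Q = n*M2, using logs to avoid overflow for large n."""
    Q = n * M2
    return math.exp(math.log(n) + n * math.log(Q) - (n + 1) * math.log(s + Q))

s, M2 = 1.0, 1.5
approx = borel_pole_approximant(s, M2, n=5000)
exact = math.exp(-s / M2) / M2   # the known Borel image of the pole
# A constant has all Q-derivatives equal to zero, so its Borel image is exactly 0.
```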
B Numerical Input
The numerical QCD input is summarised in Tab. 2, and below we give the numerical values of the decay constants from the sum rules, which are the effective LSZ factors.
B.1 Decay constants f_B, f_D and f_K
The extraction of both the QED mass shifts and the linear quark mass corrections requires values for the decay constants f_B, f_D and f_K. Note that, for consistency with the rest of this paper, these are evaluated at LO in QCD. The LO expressions for the pseudoscalar (B, D) and axial (K) correlators are well known (e.g. [38, 39]). The following values,

f_B = 0.157 GeV ,  \{s_0, M^2\} = \{33.5, 6.0\} GeV^2 ,
f_D = 0.158 GeV ,  \{s_0, M^2\} = \{5.7, 2.0\} GeV^2 ,
f_K = 0.147 GeV ,  \{s_0, M^2\} = \{1.1, 1.5\} GeV^2 ,   (B.1)

are obtained.
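For orientation (our own toy cross-check, not the paper's computation), the quoted f_K can be roughly reproduced from the free-quark chiral-limit spectral density \rho(s) = N_c/(12\pi^2), an assumed stand-in that neglects the O(m_s) and condensate terms a full LO evaluation includes:

```python
import math

Nc = 3
s0, M2, mK = 1.1, 1.5, 0.496          # GeV^2, GeV^2, GeV, as in (B.1) and Table 2

# Toy chiral-limit sum rule: fK^2 * exp(-mK^2/M2) = int_0^{s0} exp(-s/M2) * Nc/(12 pi^2) ds
rhs = Nc / (12 * math.pi ** 2) * M2 * (1 - math.exp(-s0 / M2))
fK = math.sqrt(rhs * math.exp(mK ** 2 / M2))
print(round(fK, 3))                    # within a few percent of the quoted 0.147 GeV
```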
C Self Energies and Condensates for ∆mH|QED
In this appendix we present some extra computations: the self energy and condensate contributions to ∆mB|QED. These are important for stabilising the sum rules but do not affect the actual value of ∆mB|QED per se, since the graphs proportional to Q_b^2 cancel in the mass difference. The only non-zero graph contributing to the mass shift is the q-q self energy, but it is numerically negligible. We wish to note that in all these graphs explicit gauge independence has been verified to hold after the double cut is taken.
J^P = 0^- meson masses [29]:
  m_B = 5.280 GeV ,  m_{B_s} = 5.367 GeV ,  m_D = 1.867 GeV ,  m_{D_s} = 1.968 GeV ,  m_K = 0.496 GeV ,  m_\pi = 0.137 GeV
J^P = 0^- mass differences [29]:
  ∆m_B = −0.32(5) MeV ,  ∆m_D = +4.822(15) MeV ,  ∆m_K = −3.934(20) MeV ,  ∆m_\pi = +4.5936(5) MeV
Quark masses [29]:
  m̄_b(m̄_b) = 4.18^{+0.03}_{−0.02} GeV ,  m̄_c(m̄_c) = 1.27(2) GeV ,  m_b^{pole} = 4.78(6) GeV ,  m_c^{pole} = 1.67(7) GeV ,  m_b^{kin}|_{1 GeV} = 4.53(6) GeV ,  m_c^{kin}|_{1 GeV} = 1.13(5) GeV
  m̄_s|_{2 GeV} = 93.4^{+8.6}_{−3.4} MeV ,  m̄_d|_{2 GeV} = 4.67^{+0.48}_{−0.17} MeV ,  m̄_u|_{2 GeV} = 2.16^{+0.49}_{−0.26} MeV ,  m̄_{ud}|_{2 GeV} = 3.45^{+0.35}_{−0.15} MeV ,  m̄_u/m̄_d = 0.474^{+0.056}_{−0.074} ,  m̄_s/m̄_{ud} = 27.33^{+0.67}_{−0.77}
Condensates:
  ⟨q̄q⟩|_{2 GeV} = −(269(2) MeV)^3 [35] ,  ⟨s̄s⟩|_{2 GeV} = 1.08(16) ⟨q̄q⟩ [36] ,  m_0^2 = 0.8(2) GeV^2 [37] ,  ⟨0|(α/π)G^2|0⟩ = 0.012(4) GeV^4 [21]

Table 2. Summary of input parameters. Note that as inputs into the sum rules we use mH = mH−, which has a completely negligible impact. The quantity m_ud ≡ (m_u + m_d)/2 is the light quark average. The mixed condensate is parameterised as ⟨q̄ g_s σG q⟩ = m_0^2 ⟨q̄q⟩, as is standard in the literature.
C.1 Perturbation theory
The perturbative b-b self energy graph, after mass renormalisation, takes on the form

\rho^{\Gamma_{bb}}(s, \tilde s) = \frac{N_c\, m_+^2\, Q_b^2\, \alpha}{32\pi^3 m_B}\, \lambda^{\frac{1}{2}}\, \frac{s - m_-^2}{s + m_+ m_-}\, f_R(m_b^2)\, \delta(\tilde s - s) \,,   (C.1)

with the renormalised f_R^9

f_R(m^2) = f(m^2) + \frac{32\pi^2 m^2}{e^2}\, \delta Z_m = \begin{cases} 2m^2 \left( 4 + 3 \ln \frac{\mu^2}{m^2} \right) , & \overline{\rm MS} \\ 0 , & {\rm Pole} \\ 2m^2 \left( \frac{16\mu}{3m} + \frac{2\mu^2}{m^2} \right) , & {\rm Kinetic} \end{cases}   (C.2)

f(m^2) = 4m^2 B_0(m^2, 0, m^2) + (d - 2)\, A_0(m^2) \,.   (C.3)

The functions A_0 and B_0 are the standard Passarino-Veltman functions with (FeynCalc) normalisation (2\pi\mu)^{2\epsilon} \int d^dk/(i\pi^2). Explicitly these are

B_0(m^2, 0, m^2) = \frac{1}{\hat\epsilon} + 2 + \log \frac{\mu^2}{m^2} \,, \qquad A_0(m^2) = m^2 \left( \frac{1}{\hat\epsilon} + 1 + \log \frac{\mu^2}{m^2} \right) \,,   (C.4)

with \frac{1}{\hat\epsilon} = \frac{1}{\epsilon} - \gamma_E + \log 4\pi. The q-q graph can be obtained by replacing b \to q in the result, and since it is O(m_q^2) it is negligible.

^9 Note that the vanishing in the pole scheme is clear, by the very definition of the scheme, since we are on-shell after the cuts.
C.2 Condensates
The only relevant condensate graph is given in Fig. 1 (4th diagram). With m_q \to 0 the density is

\rho^{\langle \bar qq \rangle}_{\Gamma_{bb}} = -\frac{m_b^2\, \alpha\, Q_b^2}{8\pi m_B}\, m_b \langle \bar qq \rangle\, \delta(s - m_b^2)\, \delta(\tilde s - m_b^2)\, f_R(m_b^2) \,.   (C.5)

Light quark mass corrections come from Taylor expanding the quark fields, leading to derivatives of \delta-functions. It is thus more convenient to directly display the resulting mass shift,

\Delta m_B|_{\langle \bar qq \rangle} = -\frac{m_+^2\, \alpha\, Q_b^2}{8\pi m_B Z_B^2}\, e^{\frac{2(m_B^2 - m_b^2)}{M^2}}\, \langle \bar qq \rangle \left( m_b - \frac{m_q}{4} \left( 1 + \frac{4m_b^2}{M^2} \right) \right) f_R(m_b^2) \,.   (C.6)

The \langle \bar qq \rangle condensate graph where the photon connects the b and the q-quark is not of short-distance type (it leads to 1/m_q^2 in the propagator) and is therefore omitted. This is similar to the B \to \gamma form factor, although in that case the physics is covered by the photon distribution amplitude (e.g. [28]).
D Some Classic Results

In this appendix we summarise some classic results which are of use and referred to in the paper.

D.1 Linear quark mass dependence from the Feynman–Hellmann theorem

In order to derive the Feynman–Hellmann theorem it is convenient to use states $\langle \hat B(p)|\hat B(q)\rangle = (2\pi)^3 \delta^{(3)}(\vec p - \vec q)$, normalised in a non-relativistic manner (the translation to the usual states is $|\hat B\rangle = |B\rangle/\sqrt{2E_B}$). Taking the derivative of $\langle \hat B|H|\hat B\rangle$ (using $\partial_{m_q}\langle \hat B(p)|\hat B(q)\rangle = 0$) one obtains
$$m_q \partial_{m_q} E_B = m_q \langle \hat B|\bar qq|\hat B\rangle\;, \quad (D.1)$$
which is equivalent to
$$m_q \partial_{m_q} E_B^2 = 2 m_q \langle B|\bar qq|B\rangle\;, \quad (D.2)$$
which in turn is consistent with
$$m_B^2\big|_{m_q} = \sum_q m_q \langle B|\bar qq|B\rangle\;, \quad (D.3)$$
since the momenta are independent of the mass. This is the relation quoted in (1.6) in the main text.
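The derivation above is the quantum-mechanical Feynman–Hellmann theorem, $\partial_\lambda E = \langle \psi|\partial_\lambda H|\psi\rangle$. As a minimal numerical sketch (the $2\times 2$ Hamiltonian and all parameter values below are invented for illustration), one can check it against a finite-difference derivative of the eigenvalue:

```python
import math

# Toy 2x2 Hamiltonian H(m) = [[1+m, 0.3], [0.3, 2-0.5m]], invented for illustration
def ground_energy(m):
    a, b, c = 1.0 + m, 2.0 - 0.5 * m, 0.3
    return 0.5 * (a + b) - math.sqrt(0.25 * (a - b) ** 2 + c ** 2)

def ground_vector(m):
    a, b, c = 1.0 + m, 2.0 - 0.5 * m, 0.3
    E = ground_energy(m)
    # (H - E) psi = 0  =>  psi proportional to (c, E - a), then normalise
    x, y = c, E - a
    n = math.hypot(x, y)
    return x / n, y / n

m = 0.7
x, y = ground_vector(m)

# Feynman-Hellmann: dE/dm = <psi| dH/dm |psi>, with dH/dm = diag(1, -0.5)
fh = 1.0 * x * x - 0.5 * y * y

# Finite-difference check of dE/dm
eps = 1e-6
dE = (ground_energy(m + eps) - ground_energy(m - eps)) / (2 * eps)

print(fh, dE)  # the two values agree to high accuracy
```

The agreement of the two numbers illustrates why, in (D.1), differentiating $E_B$ with respect to $m_q$ picks out the matrix element of $\partial_{m_q}H = \bar qq$.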
D.2 $\Delta m_\pi|_{\rm QED}$ from soft theorem and Weinberg sum rules

Using soft-pion techniques it was shown that [2]
$$\Delta m_\pi\big|_{\rm QED} = \frac{3\alpha}{8\pi m_\pi f_\pi^2} \int_0^\infty \! ds\, s \ln\frac{\mu^2}{s}\,\big(\rho_V(s) - \rho_A(s)\big) + \mathcal{O}(m_\pi^2/m_\rho^2)\;, \quad (D.4)$$
where $\rho_V = f_\rho\, \delta(s - m_\rho^2) + \ldots$ is the spectral density of the vector triplet current and $\rho_A$ is the analogous quantity for the axial case.
The $\ln s$-term originates from integrating over the photon momentum $d^4q$. We refer the reader to [10] for an improved treatment using chiral perturbation theory. In fact, as is the case for all soft-pion results, Eq. (D.4) follows from the LO electromagnetic term in the Lagrangian and can therefore be systematically improved beyond the soft limit to the extent that its low-energy constants (i.e. couplings) are known. Using the Weinberg sum rules [40], which are phenomenologically successful, a good estimate was obtained [2]. Taking the equations resulting from the so-called first and second Weinberg sum rules in [41],
$$f_\rho^2 = f_{a_1}^2 + f_\pi^2\;, \qquad m_\rho^2 f_\rho^2 = m_{a_1}^2 f_{a_1}^2\;, \quad (D.5)$$
(where the chiral limit $m_q = 0$ is assumed).
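Given $f_\pi$, $f_\rho$ and $m_\rho$, the two relations in (D.5) fix the axial parameters $f_{a_1}$ and $m_{a_1}$. A small numerical sketch, using the same input values as the estimate of $\Delta m_\pi|_{\rm QED}$ in this subsection ($f_\pi = 131$ MeV, $f_\rho = 215$ MeV; $m_\rho = 770$ MeV is our rounding of the quoted $0.77$ GeV):

```python
import math

# Inputs in MeV, taken from the estimate quoted in the text
f_pi, f_rho, m_rho = 131.0, 215.0, 770.0

# First Weinberg sum rule: f_rho^2 = f_a1^2 + f_pi^2
f_a1 = math.sqrt(f_rho**2 - f_pi**2)

# Second Weinberg sum rule: m_rho^2 f_rho^2 = m_a1^2 f_a1^2
m_a1 = m_rho * f_rho / f_a1

print(f"f_a1 = {f_a1:.0f} MeV, m_a1 = {m_a1:.0f} MeV")
```

This truncated two-resonance ansatz places the $a_1$ somewhat below its physical mass, which is one reason the resulting estimate is only approximate.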
Moreover, the spectral functions are truncated after the first vector-meson resonances $\rho$ and $a_1$, which can be justified as chiral symmetry is restored at high energy. Using these expressions in (D.4) one gets
$$\Delta m_\pi\big|_{\rm QED} = \frac{3\alpha}{8\pi}\, \frac{m_\rho^2 f_\rho^2}{m_\pi^2 f_\pi^2}\, m_\pi \ln\frac{f_\rho^2}{f_\rho^2 - f_\pi^2} \approx 4.8\ {\rm MeV}\;, \quad (D.6)$$
for $f_\pi = 131$ MeV, $m_\rho = 0.77$ GeV [29] and $f_\rho = 215$ MeV [42]. Since the quark mass effect is small, $\mathcal{O}((m_u - m_d)^2)$ (3.18), one has $\Delta m_\pi \approx \Delta m_\pi|_{\rm QED}$, which is rather close to the experimental value $\Delta m_\pi = +4.5936(5)$ MeV [29].
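The estimate in (D.6) is easily reproduced numerically. In this sketch $\alpha = 1/137.036$ and $m_\pi = 135$ MeV are our own input choices (the text does not specify which pion mass enters the prefactor; the result shifts by a few percent for the charged-pion mass):

```python
import math

alpha = 1 / 137.036          # fine-structure constant (assumption of this sketch)
f_pi, f_rho = 131.0, 215.0   # MeV, values quoted in the text
m_rho = 770.0                # MeV (0.77 GeV)
m_pi = 135.0                 # MeV, neutral-pion mass (assumption of this sketch)

dm_pi = (3 * alpha / (8 * math.pi)
         * (m_rho**2 * f_rho**2) / (m_pi**2 * f_pi**2)
         * m_pi
         * math.log(f_rho**2 / (f_rho**2 - f_pi**2)))

print(f"Delta m_pi|QED ~ {dm_pi:.2f} MeV")  # ~ 4.8 MeV
```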
Clearly (D.6) is a crude approximation: more detailed analyses [10, 43] including finite-width effects yield a result which is ca. $+1.2$ MeV larger [43]. We therefore assign an uncertainty of this amount to $\Delta m_\pi|_{\rm QED}$ in Tab. 1. It is also worthwhile to mention two other interesting aspects in conjunction with $\Delta m_\pi|_{\rm QED}$. First, by using QCD inequalities it has been shown that $\Delta m_\pi|_{\rm QED} \geq 0$ [44], which is of course well satisfied. Second, Dashen's theorem [45] states that $\Delta m_\pi^2|_{\rm QED} - \Delta m_K^2|_{\rm QED} = \mathcal{O}(\alpha m_s, \alpha m_q \ln m_q)$ as a result of degeneracy in the $SU(3)_F$ limit $m_s = m_d = m_u$. The corrections seem rather large and are largely kinematic, owing to the larger $K$ mass in the kaon propagator [46]. Lattice Monte Carlo simulations have settled this matter to large precision [47] (cf. [48] for a review).
References

[1] A. Zee, "The Proton - neutron mass difference problem and related topics," Phys. Rept. 3 (1972) 127–192.
[2] T. Das, G. S. Guralnik, V. S. Mathur, F. E. Low, and J. E. Young, "Electromagnetic mass difference of pions," Phys. Rev. Lett. 18 (1967) 759–761.
[3] S. Borsanyi et al., "Ab initio calculation of the neutron-proton mass difference," Science 347 (2015) 1452–1455, arXiv:1406.4088 [hep-lat].
[4] D. Giusti, V. Lubicz, C. Tarantino, G. Martinelli, F. Sanfilippo, S. Simula, and N. Tantalo, "Leading isospin-breaking corrections to pion, kaon and charmed-meson masses with Twisted-Mass fermions," Phys. Rev. D 95 no. 11, (2017) 114504, arXiv:1704.06561 [hep-lat].
[5] I. I. Bigi and A. I. Sanda, CP Violation, vol. 9. Cambridge University Press, 2009.
[6] G. C. Branco, L. Lavoura, and J. P. Silva, CP Violation, vol. 103. 1999.
[7] R. P. Feynman and G. Speisman, "Proton-Neutron Mass Difference," Phys. Rev. 94 no. 2, (1954) 500.
[8] M. Cini, E. Ferrari, and R. Gatto, "Neutron-Proton Mass Difference by Dispersion Theory," Phys. Rev. Lett. 2 no. 1, (1959) 7–9.
[9] W. N. Cottingham, "The neutron proton mass difference and electron scattering experiments," Annals Phys. 25 (1963) 424–432.
[10] J. F. Donoghue and A. F. Perez, "The Electromagnetic mass differences of pions and kaons," Phys. Rev. D 55 (1997) 7075–7092, arXiv:hep-ph/9611331.
[11] W. A. Bardeen, J. Bijnens, and J. M. Gerard, "Hadronic Matrix Elements and the pi+ pi0 Mass Difference," Phys. Rev. Lett. 62 (1989) 1343.
[12] P. Colangelo, M. Ladisa, G. Nardulli, and T. N. Pham, "Electromagnetic mass difference of heavy mesons," Phys. Lett. B 416 (1998) 208–215, arXiv:hep-ph/9709201.
[13] M. A. Luty and R. Sundrum, "Heavy meson electromagnetic mass differences from QCD," Phys. Rev. D 52 (1995) 1627–1638, arXiv:hep-ph/9502259.
[14] A. Walker-Loud, C. E. Carlson, and G. A. Miller, "The Electromagnetic Self-Energy Contribution to Mp − Mn and the Isovector Nucleon Magnetic Polarizability," Phys. Rev. Lett. 108 (2012) 232301, arXiv:1203.0254 [nucl-th].
[15] T. Hambye, "A Unified treatment of mass differences for light and heavy pseudoscalars," Phys. Lett. B 319 (1993) 300–306.
[16] J. C. Collins, "Renormalization of the Cottingham Formula," Nucl. Phys. B 149 (1979) 90–100. [Errata: Nucl. Phys. B 153, 546 (1979); Nucl. Phys. B 915, 392–393 (2017)].
[17] J. Gasser, M. Hoferichter, H. Leutwyler, and A. Rusetsky, "Cottingham formula and nucleon polarisabilities," Eur. Phys. J. C 75 no. 8, (2015) 375, arXiv:1506.06747 [hep-ph]. [Erratum: Eur. Phys. J. C 80, 353 (2020)].
[18] X. Feng, L. Jin, and M. J. Riberdy, "Lattice QCD Calculation of the Pion Mass Splitting," Phys. Rev. Lett. 128 no. 5, (2022) 052003, arXiv:2108.05311 [hep-lat].
[19] R. Zwicky, "QED-Corrections to Weak Decays," Symmetry 13 no. 11, (2021) 2036, arXiv:2205.06194 [hep-ph].
[20] S. Nabeebaccus and R. Zwicky, "Resolving charged hadrons in QED — gauge invariant interpolating operators," JHEP 11 (2022) 101, arXiv:2209.06925 [hep-ph].
[21] M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, "QCD and Resonance Physics. Theoretical Foundations," Nucl. Phys.
+page_content=' B147 (1979) 385–447.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [22] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Novikov, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Shifman, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Vainshtein, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Zakharov, “Are All Hadrons Alike?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' ,” Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' B 191 (1981) 301–369.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [23] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Shuryak, “Pseudoscalar Mesons and Instantons,” Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' B 214 (1983) 237–252.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [24] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Balitsky, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Braun, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Kolesnichenko, “The decay Sigma+ —> p gamma in QCD: Bilocal corrections in a variable magnetic field and the photon wave functions,” Sov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 48 (1988) 348–357.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [25] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Pullin and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Zwicky, “Radiative Decays of Heavy-light Mesons and the f (T ) H,H∗,H1 Decay Constants,” arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='13617 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' – 18 – [26] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Nesterenko and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Radyushkin, “Sum Rules and Pion Form-Factor in QCD,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' B 115 (1982) 410.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [27] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Kirk, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Lenz, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Rauh, “Dimension-six matrix elements for meson mixing and lifetimes from sum rules,” JHEP 12 (2017) 068, arXiv:1711.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='02100 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [Erratum: JHEP 06, 162 (2020)].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [28] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Janowski, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Pullin, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Zwicky, “Charged and neutral Bu,d,s → γ form factors from light cone sum rules at NLO,” JHEP 12 (2021) 008, arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='13616 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [29] Particle Data Group Collaboration, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Zyla et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=', “Review of Particle Physics,” PTEP 2020 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 8, (2020) 083C01.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [30] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Colangelo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Lanz, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Leutwyler, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Passemar, “Dispersive analysis of η → 3π,” Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' C 78 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 11, (2018) 947, arXiv:1807.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='11937 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [31] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Gell-Mann, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Oakes, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Renner, “Behavior of current divergences under SU(3) x SU(3),” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 175 (1968) 2195–2199.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [32] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Donoghue, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Golowich, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Holstein, Dynamics of the standard model, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' CUP, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [33] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Shifman, “Quark hadron duality,” in 8th International Symposium on Heavy Flavor Physics, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 1447–1494.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' World Scientific, Singapore, 7, 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' arXiv:hep-ph/0009131.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [34] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Bigi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Shifman, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Uraltsev, “Aspects of heavy quark theory,” Ann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Part.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 47 (1997) 591–661, arXiv:hep-ph/9703290.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [35] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Bali, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Bruckmann, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Constantinou, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Costa, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Endrodi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Katz, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Panagopoulos, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Schafer, “Magnetic susceptibility of QCD at zero and at finite temperature from the lattice,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' D 86 (2012) 094512, arXiv:1209.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='6015 [hep-lat].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [36] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' McNeile, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Bazavov, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Davies, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Dowdall, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Hornbostel, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Lepage, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Trottier, “Direct determination of the strange and light quark condensates from full lattice QCD,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' D 87 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 3, (2013) 034503, arXiv:1211.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='6577 [hep-lat].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [37] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Ioffe, “Condensates in quantum chromodynamics,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Atom.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 66 (2003) 30–43, arXiv:hep-ph/0207191.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [38] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Jamin and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Lange, “fB and fBs from QCD sum rules,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' D65 (2002) 056005, arXiv:hep-ph/0108135 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [39] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Ball and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Zwicky, “SU(3) breaking of leading-twist K and K* distribution amplitudes: A Reprise,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' B 633 (2006) 289–297, arXiv:hep-ph/0510338.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [40] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Weinberg, “Precise relations between the spectra of vector and axial vector mesons,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 18 (1967) 507–509.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [41] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Zwicky, “A brief Introduction to Dispersion Relations and Analyticity,” in Quantum Field Theory at the Limits: from Strong Fields to Heavy Quarks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 10, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' arXiv:1610.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='06090 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [42] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Bharucha, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Straub, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Zwicky, “B → V ℓ+ℓ− in the Standard Model from light-cone sum rules,” JHEP 08 (2016) 098, arXiv:1503.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content='05534 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [43] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Gross, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Treiman, and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Wilczek, “Light Quark Masses and Isospin Violation,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' D 19 (1979) 2188.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' [44] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Witten, “Some Inequalities Among Hadron Masses,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' 51 (1983) 2351.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' – 19 – [45] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Dashen, “Chiral SU(3) x SU(3) as a symmetry of the strong interactions,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'}
diff --git a/ddE2T4oBgHgl3EQfGAap/content/tmp_files/2301.03653v1.pdf.txt b/ddE2T4oBgHgl3EQfGAap/content/tmp_files/2301.03653v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..1ddb4d91e528deefd70dc57d2703861db91aa29c
--- /dev/null
+++ b/ddE2T4oBgHgl3EQfGAap/content/tmp_files/2301.03653v1.pdf.txt
@@ -0,0 +1,949 @@
+A Quantum Mechanical Description of Photosensitization in Photodynamic Therapy using
+a Two-Electron Molecule Approximation
+
+Vincent M. Rossi
+Washburn University Department of Physics & Astronomy, Topeka, KS 66621
+vincent.rossi@washburn.edu
+
+ABSTRACT
+A fundamental, Quantum Mechanical description of photoactivation of a generic photosensitizer
+and the ensuing transfer of energy to endogenous oxygen as part of the Type II pathway to
+photodamage during photodynamic therapy (PDT) is presented. The PS and molecular oxygen
+are approximated as two-electron molecules. Conservation of energy and of angular momenta of
+the two-molecule system are upheld via selection rules throughout the four-stage process,
+including initial states, absorption of a photon by the PS, conversion of the PS to an excited spin
+triplet via intersystem crossing (ISC), and the transition of molecular oxygen to an excited spin
+singlet state via a Triplet-Triplet Exchange of electrons with the PS. The provided description of
+photosensitization will provide students and researchers with a fundamental introduction to PDT,
+while offering the broader population of Quantum Mechanics and Physical Chemistry students
+an advanced example of quantum systems in an applied, medical context.
+
+Keywords: Photosensitization, Photodynamic Therapy (PDT), photochemistry, Dexter Exchange,
+Triplet-Triplet Exchange
+
+
+INTRODUCTION
+Photodynamic therapy (PDT) is a localized and selective therapy that operates on principles
+included under the generic classifications of photobiology, photochemistry and photophysics
+(Jacques 1992; Henderson and Dougherty 1992; Hamblin and Mroz 2008; Bonnett 2000; Hasan,
+Moore and Ortel 2000). While PDT has found its broadest application and research as a cancer
+therapy, it has also been used for antimicrobial therapy for combating antibiotic resistant strains
+(Wainwright 1998). Three ingredients are required for PDT—a photosensitizer (PS), light, and
+oxygen—in order to induce photochemical damage to its targets. In short, the PS is administered
+to the patient and after an appropriate time interval, the targeted site is illuminated with light of
+appropriate wavelength to be absorbed by the PS. Upon excitation by light of appropriate energy,
+the excited PS interacts with endogenous molecular oxygen in order to create reactive oxygen
+species (ROS). The interactions between the excited PS and endogenous molecular oxygen to
+generate ROS have been recognized and developed over some time (Kautsky 1939; Keszthelyi et
+al. 1999). These ROS then interact with their immediate environment, creating oxidative
+damage. Targeted cancer cells or bacteria are eliminated once they reach a threshold of damage
+via ROS (Nilsson, Merkel, and Kearns 1972; Schmidt and Bodesheim 1998).
+
+Absorbed photons transfer discrete energies to the PS, raising it from the singlet ground state
+(1PS) to an excited singlet state (1PS*),
+1PS + hν → 1PS*,
+(1)
+where the product of Planck's constant (h) and the frequency of the absorbed light (ν) represents the
+addition of energy via absorption (Fig. 1). The PS may then fluoresce back to its ground state.
+
+Preferably, the PS in its excited singlet state will transition to its excited triplet state (3PS*)
+through Intersystem Crossing (ISC),
+1PS* → 3PS*.
+(2)
+
+Figure 1. The process leading to the preferred Type II path to photodamage starts when the PS is
+excited by incident light of energy ℎν. The PS then relaxes via ISC to an excited triplet state,
+whereby it can transfer energy to molecular oxygen via a triplet-triplet electron transfer.
+
+Once in the excited triplet state, the photosensitizer may then decay back to its ground state
+through one of two mechanisms. The first, called the Type I pathway to photodamage
+in PDT, involves the PS in its excited triplet state interacting with the surroundings, thereby
+losing energy and creating free radicals. The resulting free radicals may then react with
+endogenous oxygen to form cytotoxic species such as OH• (Jacques 1992; Wainwright 1998;
+Ochsner 1997; Peavy 2002; Prasad 2003; Mata et al. 2006).
+
+The Type II pathway to photodamage in PDT entails a direct interaction between the PS in its
+excited triplet state and endogenous molecular oxygen in its triplet ground state (3O2). Such
+interactions, termed a Triplet-Triplet Exchange, can also cause the PS agent to decay back to its
+singlet ground state, in turn raising the molecular oxygen to an excited singlet state (1O2*),
+3PS* + 3O2 → 1PS + 1O2*.
+(3)
+The excited singlet state of molecular oxygen can then cause damage to its surroundings
+(Nilsson, Merkel, and Kearns 1972; Schmidt and Bodesheim 1998). Due to the long lifetime of
+the excited triplet PS, sufficient time is allowed for interactions with endogenous oxygen. For
+this reason, the Type II pathway is generally accepted as the most common pathway to
+photodamage in PDT (Jacques 1992; Henderson and Dougherty 1992; Hamblin and Mroz 2008;
+Wainwright 1998; Ochsner 1997; Peavy 2002; Prasad 2003; Mata et al. 2006).
+
+The above introduction to PDT is given in a typical fashion as would be found in biological or
+medical descriptions of PDT (Kearns and Khan 1969). The remainder of this paper is interested
+in giving a more rigorous, quantum mechanical explanation of the process of photosensitization
+in PDT. The Quantum Mechanical processes involved in activation of the Type II pathway to
+photodamage will be covered in a simplified fashion so as to serve as an accessible description to
+students and researchers who are new to PDT research. The subset of researchers responsible for
+light delivery and light-tissue interactions in PDT may find this description useful. As such, the
+quantum notation more familiar to physicists will be used moving forward. In particular,
+quantum states of the PS and molecular oxygen will be treated as those of two-electron
+molecules. Representation of photosensitization in PDT using this notation will be more familiar
+to the students of quantum mechanics and physical chemistry while simultaneously appealing to
+a rigorous sensibility by detailing the physical phenomena associated with each step of the
+photosensitization process (Sec. 2). The addition of angular momentum between the two
+
+molecules will be employed in order to define the overall state of the system of molecules at
+each step. The larger discussion will be summarized at the end of the paper (Sec. 3).
+
+QUANTUM TWO-ELECTRON MODEL
+We will consider a basic quantum mechanical example of a generic PS interacting with
+molecular oxygen as part of the desired Type II pathway to photodamage achieved in PDT. A
+generic diagram of the photoactivation of the PS and its interactions with molecular oxygen is
+depicted in Fig. 1. In particular, this work is concerned with describing the interactions between
+the PS and molecular oxygen from the time the PS is excited via absorption of a photon through
+the transfer of energy to molecular oxygen via a Triplet-Triplet Exchange. As such, all other
+pathways will be ignored.
+
+Both the PS and molecular oxygen can be approximated as two-electron molecules. For example,
+molecular oxygen forms via the covalent bond between two oxygen atoms, each needing a pair
+of 2p electrons in order to fill the 2p shell (Turrens 2003). This pair of shared 2p electrons will
+therefore be considered as those undergoing the transitions that follow during the PDT process.
+The same assumption will be made of the PS, considering that the exchange of energy between
+the PS and molecular oxygen comes in the form of electron exchange between a pair of two
+electron systems.
+
+In quantum mechanics, we are concerned with eigenvalue problems where we can determine the
+given set of eigenstates corresponding to a given set of eigenvalues. The eigenstate of a system
+corresponds to the wavefunction of the system, or generically speaking, the state of the system.
+
+The eigenvalue corresponds to some physically measurable quantity, or characteristic of the
+system, such as its energy, spin or angular momentum. As alluded to here, the characteristics of a
+quantum state can have spatial and spin dependencies, such that their corresponding
+wavefunctions must also incorporate spatial and spin states. We can separate the overall
+wavefunction, Ψ(r⃗, m_s), into the product of the two functional dependencies,
+Ψ(r⃗, m_s) = Φ(r⃗)χ(m_s),
+(4)
+where r⃗ represents the three-dimensional spatial dependence of the spatial wave function Φ(r⃗)
+and m_s is the spin quantum number, representing the spin dependence of the spin wavefunction
+χ(m_s). In this context of atomic and molecular physics, the wavefunction represents the overall
+state of an electron. Since electrons are Fermions, their overall wavefunctions must be
+antisymmetric.
+
+When looking specifically at the context of PDT, we are dealing with systems of two electron
+molecules. Therefore, the overall wavefunction (4) for both the PS and molecular oxygen must
+be modified to reflect a two electron system,
+Ψ(r⃗₁, m_s1; r⃗₂, m_s2) = Φ(r⃗₁, r⃗₂)χ(m_s1, m_s2),
+(5)
+where the subscripts 1 and 2 represent the two separate electrons.
+
+From the requirement for electrons to have antisymmetric wavefunctions follows the definition
+of the singlet and triplet states, which refer specifically to the spin wavefunction, χ(m_s1, m_s2), of
+the two electron system. Combining the spins of the two electrons leads to a set of
+three possible symmetric wavefunctions,
+χ(m_s1, m_s2) = χ₊₊, (1/√2)(χ₊₋ + χ₋₊), or χ₋₋,
+(6)
+
+where the + and − refer to the different combinations of spin up (m_s = +1/2) and spin down
+(m_s = −1/2) states, respectively. This state is specifically called the (spin) triplet state because
+there is a set of three possible symmetric combinations for the two electron system. Similarly, there
+is only a single antisymmetric combination of spins,
+χ(m_s1, m_s2) = (1/√2)( χ₊₋ − χ₋₊ ),
+(7)
+which is therefore referred to as the (spin) singlet state (Sakurai 1994).
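Equations (6) and (7) can be checked directly. In the sketch below (plain Python written for this note, not taken from any library), the two-electron spin states are vectors in the basis {|++⟩, |+−⟩, |−+⟩, |−−⟩}, S2 is the standard matrix of the total-spin-squared operator in that basis (in units of ħ²), and the assertions confirm the exchange symmetry and the s values of the singlet and triplet combinations:

```python
import math

# Basis ordering for the two-electron spin space: |++>, |+->, |-+>, |-->
S2 = [[2, 0, 0, 0],          # total-spin-squared operator S^2 (units of hbar^2);
      [0, 1, 1, 0],          # eigenvalues are s(s + 1) = 2 (triplet) or 0 (singlet)
      [0, 1, 1, 0],
      [0, 0, 0, 2]]

def apply(op, v):
    return [sum(op[i][j] * v[j] for j in range(4)) for i in range(4)]

def swapped(v):
    # Exchange the two electrons: |+-> <-> |-+>
    return [v[0], v[2], v[1], v[3]]

r = 1 / math.sqrt(2)
triplet = [[1, 0, 0, 0], [0, r, r, 0], [0, 0, 0, 1]]    # Eq. (6)
singlet = [0, r, -r, 0]                                  # Eq. (7)

for chi in triplet:
    assert swapped(chi) == chi                           # symmetric under exchange
    assert apply(S2, chi) == [2 * c for c in chi]        # s(s + 1) = 2, i.e. s = 1
assert swapped(singlet) == [-c for c in singlet]         # antisymmetric under exchange
assert apply(S2, singlet) == [0.0, 0.0, 0.0, 0.0]        # s = 0
```

The eigenvalue s(s + 1) equals 2 for the three triplet states (s = 1) and 0 for the singlet (s = 0), matching the naming used above.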
+
+One of the spin states from (6) or (7) can therefore be applied directly within the overall two
+electron wavefunction (5) for either the PS or molecular oxygen. This leaves us to more
+thoroughly define the spatial state of the system (Sakurai 1994). Resolving the spatial
+wavefunction will be based upon the quantum mechanical rules for dealing with systems of
+identical particles and the assumption that we can start from the model of the simplest of two
+electron systems—the helium atom. Under this premise, the spatial wave function can undergo a
+swap of electrons such that
+Φ(r⃗₁, r⃗₂) = (1/√2)[ ψ₁₀₀(r⃗₁)ψ_nlm(r⃗₂) ± ψ₁₀₀(r⃗₂)ψ_nlm(r⃗₁) ],
+(8)
+where the wavefunctions ψ₁₀₀ and ψ_nlm refer to electrons in the ground and possible excited
+states, respectively. The two products ψ₁₀₀(r⃗₁)ψ_nlm(r⃗₂) and ψ₁₀₀(r⃗₂)ψ_nlm(r⃗₁) account for a
+change of state via exchange of identical particles—changing the configuration of the system by
+exchanging the states of two electrons translates to a change of state. However, the total spatial
+state (8) is the superposition of these two states, which can be gained either by the addition or
+subtraction of the two combinations. The addition of these two spatial states results in a
+symmetric spatial wave function. Conversely, the subtraction of the two states results in an
+antisymmetric spatial wave function.
+
+Now that the symmetric and antisymmetric representations of the spatial and spin states are
+defined, we look to their possible combinations for the overall wavefunction of the two electron
+system (Sakurai 1994). Since the electron wavefunction must have overall antisymmetry, the
+antisymmetric spin singlet state (7) must pair with the symmetric spatial state (8), giving the
+overall antisymmetric singlet state Ψ_singlet(r⃗₁, m_s1; r⃗₂, m_s2),
+Ψ_singlet = (1/√2)[ ψ₁₀₀(r⃗₁)ψ_nlm(r⃗₂) + ψ₁₀₀(r⃗₂)ψ_nlm(r⃗₁) ] × (1/√2)( χ₊₋ − χ₋₊ ).
+(9)
+Similarly, the symmetric spin triplet (6) must pair to the antisymmetric spatial wavefunction (8),
+giving the overall antisymmetric triplet state Ψ_triplet(r⃗₁, m_s1; r⃗₂, m_s2),
+Ψ_triplet = (1/√2)[ ψ₁₀₀(r⃗₁)ψ_nlm(r⃗₂) − ψ₁₀₀(r⃗₂)ψ_nlm(r⃗₁) ] × (1/√3)[ χ₊₊ + (1/√2)(χ₊₋ + χ₋₊) + χ₋₋ ].
+(10)
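The exchange behavior built into Eqs. (8)–(10) is easy to verify numerically. In the sketch below, psi_100 and psi_nlm are hypothetical one-dimensional stand-ins for the ground and excited orbitals; any two distinct functions suffice to exercise the symmetry:

```python
import math

# Hypothetical radial stand-ins for the ground and excited orbitals; only the
# exchange symmetry of Eq. (8) is being tested, not the actual helium orbitals.
psi_100 = lambda r: math.exp(-r)
psi_nlm = lambda r: r * math.exp(-r / 2)

def phi(r1, r2, sign):
    """Eq. (8): symmetric (sign = +1) or antisymmetric (sign = -1) spatial state."""
    return (psi_100(r1) * psi_nlm(r2) + sign * psi_100(r2) * psi_nlm(r1)) / math.sqrt(2)

r1, r2 = 0.7, 1.9
assert math.isclose(phi(r1, r2, +1), phi(r2, r1, +1))     # symmetric under swap
assert math.isclose(phi(r1, r2, -1), -phi(r2, r1, -1))    # antisymmetric under swap
```

Pairing the symmetric spatial state with the antisymmetric spin singlet (and vice versa), as in Eqs. (9) and (10), is exactly what makes the overall two-electron wavefunction antisymmetric.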
+
+The system can be described in terms of the quantum numbers for orbital angular momentum (l),
+magnetic quantum number (m_l), spin angular momentum (s), and spin quantum number (m_s). In
+addition to the aforementioned quantum numbers comes the principal quantum number (n),
+which is associated with the energy of the electron orbital. Starting with the principal quantum
+number, which can take any nonzero, positive integer value (n = 1, 2, 3,...), we are able to define
+the allowed values of the angular momentum and magnetic quantum number as follows (Liboff
+1998):
+
+l = 0, 1, 2,..., (n-1)
+(11)
+m_l = −l, −l + 1, ..., 0, ..., +l.
+(12)
+
+In addition to the limitations placed on the possible states of angular momentum and the
+corresponding magnetic quantum numbers, there are quantum rules for combining angular
+momenta. The reasons for adding angular momenta at the quantum level could entail the need to
+consider multiple particles within a system, or even the combination of different forms of
+angular momenta. Both of these scenarios will affect our quantum mechanical discussion of
+PDT. If we begin by defining a generic angular momentum term, j, two angular momenta (j1 and
+j2) can be added to reach the following permitted values:
+j_min = |j₁ − j₂|
+(13)
+j_max = j₁ + j₂.
+(14)
+Based on these maximum and minimum values of total angular momenta,
+j = |j₁ − j₂|, |j₁ − j₂| + 1, ..., j₁ + j₂
+(15)
+is the range of acceptable total angular momentum values (Liboff 1998). When dealing with the
+addition of angular momenta, the range
+m_j = −(j₁ + j₂), ..., 0, ..., j₁ + j₂
+(16)
+follows from (12) and (15).
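Equations (13)–(16) can be tabulated directly. The sketch below is a minimal illustration (the function names are our own, not from any library) that recovers the s = 0, 1 values quoted later for a pair of electrons:

```python
def allowed_j(j1, j2):
    """Allowed total angular momenta j = |j1 - j2|, ..., j1 + j2 (Eq. 15).

    j1 and j2 may be half-integers; the allowed values step by one."""
    lo, hi = abs(j1 - j2), j1 + j2
    n = int(round(hi - lo))            # number of unit steps between the extremes
    return [lo + k for k in range(n + 1)]

def allowed_mj(j):
    """m_j = -j, -j + 1, ..., +j for a single total angular momentum j."""
    return [-j + k for k in range(int(round(2 * j)) + 1)]

# Two spin-1/2 electrons (the PS or molecular oxygen as a two-electron system):
assert allowed_j(0.5, 0.5) == [0.0, 1.0]      # singlet (s = 0) and triplet (s = 1)
# Combining a spin-1 PS with the spin-1 oxygen ground state:
assert allowed_j(1, 1) == [0, 1, 2]           # the three totals appearing in Eq. (26)
assert allowed_mj(1) == [-1, 0, 1]
```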
+
+These allowed values for the quantum numbers are based on the solution for the spatial
+wavefunction of the hydrogen atom in spherical coordinates by separating radial and angular
+dependencies
+Φ(r, θ, ϕ) = R(r) Y_l^{m_l}(θ, ϕ),
+(17)
+
+where R(r) represents the radial wavefunction and Y_l^{m_l}(θ, ϕ) the spherical harmonics. Of key
+importance is the orthonormality of these special functions. Stating the wavefunction in terms of
+the given quantum numbers via subscripts, Φ_{nlm_l}, taking the inner product of two such
+wavefunctions (or equivalently, integrating the product of the two wave functions over all space)
+returns
+⟨Φ_{n′l′m_l′} | Φ_{nlm_l}⟩ = δ_{nn′} δ_{ll′} δ_{m_l m_l′},
+(18)
+where any given delta function takes the value of zero when the respective indices differ and
+unity when they are the same (Liboff 1998).
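The orthonormality in Eq. (18) can be checked numerically for the angular part. The sketch below restricts itself to the m_l = 0 spherical harmonics, for which Y_l^0 ∝ P_l(cos θ), and uses a simple midpoint quadrature; it is a toy check under those assumptions, not a general implementation:

```python
import math

# Legendre polynomials P_l(x) for small l, hand-coded for this check
P = {0: lambda x: 1.0,
     1: lambda x: x,
     2: lambda x: 0.5 * (3 * x * x - 1)}

def Y(l, x):
    """m_l = 0 spherical harmonic evaluated at x = cos(theta)."""
    return math.sqrt((2 * l + 1) / (4 * math.pi)) * P[l](x)

def overlap(l1, l2, n=20000):
    """<Y_l1^0 | Y_l2^0> over the sphere; the phi integral contributes 2*pi."""
    dx = 2.0 / n
    s = sum(Y(l1, -1 + (k + 0.5) * dx) * Y(l2, -1 + (k + 0.5) * dx) for k in range(n))
    return 2 * math.pi * s * dx

assert abs(overlap(1, 1) - 1.0) < 1e-6    # normalized: delta_{ll'} = 1
assert abs(overlap(1, 2)) < 1e-6          # orthogonal: delta_{ll'} = 0
```

The vanishing overlap for l ≠ l′ is precisely the selection-rule statement used in the next paragraphs: an unperturbed transition between states of different l (or n, or m_l) has zero amplitude.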
+
+As an example illustrating the principle of conservation of energy, since the quantum number 𝑛
+is tied to the energy of a state, the inner product of the final (Φ_{n′l′m_l′}) and initial (Φ_{nlm_l})
+states will be zero if n′ ≠ n, meaning the system cannot transition spontaneously and unperturbed
+between the two states. The result will be unity if n′ = n, such that the transition between the
+two states does not violate the conservation of energy. The only way to change the energy of the
+system between the initial and final states is to operate on them by doing work on the system, or
+by letting the system itself do work. Since there is no operator acting on the energy of the states
+in (18), the energy of the system must remain the same between the final and initial states.
+
+Similarly, the conservation of angular momentum is thus upheld in reference to the angular
+momentum quantum number l between the two states. A transition from Φ_{nlm_l} directly to
+Φ_{n′l′m_l′} is forbidden unless l′ = l. This is of fundamental importance for the following
+discussion, as we shall see that the angular momentum of the PS goes from l = 0 to l = 1
+
+during activation in PDT. This transition is however perfectly acceptable as the PS is being acted
+on by the incident light—by absorbing a photon (which carries an angular momentum of 𝑙 = 1),
+the PS gains angular momentum in addition to energy. Later, this angular momentum will be
+transferred to molecular oxygen along with energy in order to elicit a phototoxic effect.
+Ultimately, when operating on one quantum state in order to cause it to transition to another
+quantum state, the operator acting on the system will invoke a set of selection rules as to which
+quantum transitions are allowed versus forbidden.
+
+One further note should be made on the notation employed. The spin angular momentum (s) and
+spin quantum number (m_s) have been left out of the above conversation. However, as the name
+suggests, spin angular momentum is another form of angular momentum, or at least behaves
+quantum mechanically in exactly the same fashion as angular momentum. The addition of spin
+angular momenta therefore abides by the general rules for addition of angular momenta (15). The
+spin angular momentum of an electron is s = 1/2, such that the associated spin quantum numbers
+are m_s = ±1/2. Since both the PS and molecular oxygen of interest can each be considered two
+electron systems, their respective spin angular momenta can take values of s = 0, 1 via the rules
+for addition of angular momenta. Therefore, the spin quantum numbers for each of these
+individual molecules can take the values m_s = 0, ±1. The photon carries no spin angular
+momentum (s = 0, m_s = 0).
+
+To begin our formal discussion of the quantum mechanical processes involved in PDT, we can
+use the addition of angular momenta in order to determine the state of each of the molecules
+using the condensed notation
+
+|Ψ⟩_molecule = |l, s; m_l, m_s⟩.
+(19)
+In this notation, the total state of the system is the product of the two molecular states
+|Ψ⟩_system = |Ψ⟩_PS ⊗ |Ψ⟩_O = |l, s; m_l, m_s⟩_PS ⊗ |l, s; m_l, m_s⟩_O,
+(20)
+where again PS and O refer to the photosensitizer and molecular oxygen, respectively.
+
+Initially, both the PS and oxygen reside in their ground states—the PS in a spin singlet and the
+molecular oxygen a spin triplet (Fig. 2a)—such that
+|Ψ⟩_i = |l = 0, s = 0; m_l = 0, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O.
+(21)
+Again using the addition of angular momentum, this time between the two molecules, the overall
+initial state given in terms of the same quantum numbers becomes
+|Ψ⟩_i = |l = 0, s = 1; m_l = 0, m_s = 0, ±1⟩.
+(22)
+
+When the PS absorbs light of the appropriate wavelength, it transitions to an excited singlet state
+(Fig. 2b). Since the photon carries a quantum angular momentum of 𝑙 = 1, this transition
+corresponds to an increase in orbital angular momentum of Δ𝑙 = +1 within the PS. The state of
+molecular oxygen remains unchanged during this process. Upon absorption, the system
+transitions to the state
+|Ψ⟩_abs = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O,
+(23)
+where again the addition of angular momentum between molecules gives the overall state
+|Ψ⟩_abs = |l = 1, s = 1; m_l = 0, ±1, m_s = 0, ±1⟩.
+(24)
+
+
+
+
+Figure 2. Energy level diagrams of the PDT process leading to the creation of singlet oxygen,
+depicted in a HOMO-LUMO representation. a) The initial states of the PS and molecular
+oxygen. b) The PS transitions to an excited spin singlet state via absorption. c) The PS transitions
+to an excited spin triplet state via Intersystem Crossing. d) Triplet-Triplet electron exchange
+between the PS and molecular oxygen leads to the final state of the system where the excited
+spin singlet state of oxygen is ready to impose oxidative damage in surrounding organisms.
+
+Once in the excited state, the PS can either transition back to its ground state via fluorescence, or
+undergo a nonradiative transition to a spin triplet state. The latter process is desirable for the PDT
+process, allowing the PS in its excited triplet state to interact with molecular oxygen. The
+nonradiative process by which the PS moves from an excited spin singlet to an excited spin
+triplet state is known as Intersystem Crossing, whereby the spin of the excited electron is no
+longer paired to that of the electron in the ground state (Fig. 2c) (Bonnett 2000). Due to the
+conservation of spin angular momentum, the transition from a singlet to a triplet state is a
+quantum mechanically forbidden transition. However, Intersystem Crossing is made possible by
+spin-orbit coupling, where the orbital and spin angular momenta are combined to give possible
+total angular momenta given in (15). This nonradiative transition relies upon the overlap of the
+vibrational states of the initial and final states of the electron (Bonnett 2000; Sakurai 1994;
+Liboff 1998; Beljonne et al. 2001). Again, molecular oxygen remains in its ground state during
+this process. Via Intersystem Crossing, the system transitions to the state
+|Ψ⟩_ISC = |l = 1, s = 1; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O.
+(25)
+The addition of angular momentum between molecules gives the possible states
+|Ψ⟩_ISC = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩
+    + |l = 1, s = 1; m_l = 0, ±1, m_s = 0, ±1⟩
+    + |l = 1, s = 2; m_l = 0, ±1, m_s = 0, ±1, ±2⟩,
+(26)
+where the states s = 0, 1, 2 are allowed along with their corresponding −s ≤ m_s ≤ s values.
+Although the excited spin triplet state of the PS may phosphoresce back to its ground state, this
+state has a long lifetime, such that interaction with molecular oxygen becomes more likely
+(Hatz, Poulsen and Ogilby 2008).
+
+The PS in its excited triplet state interacts with the molecular oxygen in its ground state (spin
+triplet) via a Triplet-Triplet Exchange of electrons (Fig. 2d). In this process, the excited electron
+of the PS transitions to the molecular oxygen and the electron with matching spin in the ground
+state of molecular oxygen transitions to the ground state of the PS. Along with this swapping of
+electrons comes an exchange of energy, such that the PS returns to its ground (spin singlet) state
+and the molecular oxygen transitions to an excited (spin singlet) state (Fig. 2d) (Bonnett 2000;
+
+Dexter 1953). The Triplet-Triplet Exchange is also referred to as a Dexter Exchange, based upon
+the seminal work “A Theory of Sensitized Luminescence in Solids” written by D.L. Dexter,
+which thoroughly explains this process. While the focus of this section is to simply give a
+general description of the quantum states of the PS and molecular oxygen during the stages of
+PDT, the reader is referred to Dexter’s work for a more rigorous and thorough description of the
+exchange (Dexter 1953).
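The possible values in (26) follow the standard triangle rule for adding two angular momenta, |j1 − j2| ≤ j ≤ j1 + j2. As a sanity check, the allowed totals and their projections can be enumerated in a few lines of Python (a sketch; the function names are illustrative, not from the paper):

```python
def coupled_j(j1, j2):
    """Allowed total quantum numbers when adding j1 and j2 (triangle rule)."""
    return list(range(abs(j1 - j2), j1 + j2 + 1))

def m_values(j):
    """Allowed projections -j <= m <= j for a given quantum number j."""
    return list(range(-j, j + 1))

# Coupling the spin-triplet PS (s1 = 1) to spin-triplet O2 (s2 = 1),
# as in Equation (26):
allowed_s = coupled_j(1, 1)
print(allowed_s)  # [0, 1, 2]

# Dimension check: the coupled basis must span the same space as the
# product basis, (2*s1 + 1)(2*s2 + 1) = 9 states.
total_states = sum(len(m_values(s)) for s in allowed_s)
print(total_states)  # 9
```

The nine coupled (s, m_s) states match the nine product states of the two spin triplets, as they must.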
+
+Continuing with the same quantum numbers, the corresponding wave function for the system
+becomes
+|Ψ⟩_TT = |l = 0, s = 0; m_l = 0, m_s = 0⟩_PS ⊗ |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩_O2.
+(27)
+The addition of angular momentum between the two molecules gives the state
+|Ψ⟩_TT = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩.
+(28)
+Given that the final state of this system must remain unchanged from that of (26) during this
+process, we can conclude that after the PS underwent Intersystem Crossing the system must have
+been in the first of the states listed in (26),
+|Ψ⟩_ISC = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩.
+(29)
+From this conclusion, it follows that after the PS undergoes Intersystem Crossing, the system
+must be described by the individual molecular states
+|Ψ⟩_ISC = |l = 1, s = 1; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O2.
+(30)
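The elimination argument above can be sketched in a few lines (the dictionary encoding is an illustrative convention, not notation from the paper): of the three terms in (26), only the s = 0 term matches the coupled state (28) required after the exchange.

```python
# Encode a coupled state by its quantum numbers (l, s) and projections.
def state(l, s):
    return {"l": l, "s": s,
            "m_l": list(range(-l, l + 1)),
            "m_s": list(range(-s, s + 1))}

# The three candidate states produced by Intersystem Crossing, Eq. (26):
candidates = [state(1, s) for s in (0, 1, 2)]

# The coupled state demanded after the Triplet-Triplet Exchange, Eq. (28):
final = state(1, 0)

# Conservation of total angular momentum singles out the s = 0 term,
# reproducing the conclusion drawn in Eq. (29).
matches = [c for c in candidates if c == final]
assert len(matches) == 1 and matches[0]["s"] == 0
```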
+
+Pay particular attention to tracking the transfer and conservation of angular momentum
+throughout the processes described. Both molecules are initially in their ground states. Upon
+excitation of the PS via absorption of a photon, the angular momentum of the system increases.
+
+While the angular momentum of the PS does not change during Intersystem Crossing, it does
+change in the final step, as the angular momentum of the PS is transferred to the molecular
+oxygen. To better demonstrate the point, the final step of Figure 2, the triplet-triplet exchange
+between the PS and oxygen, is repeated in Figure 3 along with the associated molecular
+orbitals of oxygen and protoporphyrin IX (PpIX), a typical photosensitizer employed clinically
+in PDT of cancers. The increase in angular momentum of molecular oxygen via the triplet-triplet
+exchange is then visually apparent.
+
+Figure 3. The HOMO-LUMO representations employed in the final step of Figure 2 are
+represented here again, with the corresponding molecular orbitals of O2 and a common PS,
+protoporphyrin-IX (PpIX). Molecular orbitals were generated via the Amsterdam Density
+Functional program (te Velde et al. 2001).
+
+SUMMARY
+
+A summary of the states of the PS–O2 system, based upon the transitions and physical processes
+described, is as follows:
+1. The PS and molecular oxygen begin in their ground states, the PS in a spin singlet and the
+molecular oxygen in a spin triplet,
+|Ψ⟩_0 = |l = 0, s = 0; m_l = 0, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O2.
+(31)
+2. Upon absorption of a photon, the PS is raised to an excited spin singlet state, while the
+molecular oxygen is unaffected,
+|Ψ⟩_abs = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O2.
+(32)
+3. The PS undergoes a nonradiative transition from the excited spin singlet to an excited spin
+triplet via Intersystem Crossing, while the molecular oxygen again remains unchanged in its
+spin triplet ground state,
+|Ψ⟩_ISC = |l = 1, s = 1; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O2.
+(33)
+4. Finally, the molecular oxygen is raised from its ground spin triplet state to an excited spin
+singlet state as the PS simultaneously relaxes from its excited spin triplet state to its spin
+singlet ground state,
+|Ψ⟩_TT = |l = 0, s = 0; m_l = 0, m_s = 0⟩_PS ⊗ |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩_O2.
+(34)
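The bookkeeping across the four stages can be sketched as follows (the stage names and tuple encoding are illustrative assumptions, not from the paper); each assertion mirrors one of the conservation statements above:

```python
def coupled_j(j1, j2):
    """Allowed total quantum numbers when adding j1 and j2."""
    return list(range(abs(j1 - j2), j1 + j2 + 1))

def total_l(stage):
    """Summed orbital quantum number of the two molecules."""
    return stage["PS"][0] + stage["O2"][0]

# (l, s) of each molecule at each stage, Equations (31)-(34).
stages = {
    "initial":    {"PS": (0, 0), "O2": (0, 1)},  # (31)
    "absorption": {"PS": (1, 0), "O2": (0, 1)},  # (32)
    "ISC":        {"PS": (1, 1), "O2": (0, 1)},  # (33)
    "exchange":   {"PS": (0, 0), "O2": (1, 0)},  # (34)
}

# Absorption raises the PS's l by one unit with spin unchanged.
assert stages["absorption"]["PS"] == (1, 0)

# ISC flips the PS from spin singlet to spin triplet, leaving l alone.
assert stages["ISC"]["PS"] == (1, 1)

# The exchange moves the orbital excitation from the PS to the oxygen:
# the summed l is unchanged.
assert total_l(stages["ISC"]) == total_l(stages["exchange"]) == 1

# A total spin of s = 0 is available both before (1 coupled with 1)
# and after (0 coupled with 0) the exchange.
assert 0 in coupled_j(1, 1) and coupled_j(0, 0) == [0]
```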
+
+A summary of these processes and states is also depicted in Figure 2, where the overall
+wavefunction of the system at each step is listed with the corresponding energy diagram.
+
+Simply put, energy from the excitation light is absorbed by the PS. Following some internal
+transitions, the PS transfers the added energy to the molecular oxygen via Triplet-Triplet
+Exchange. The final state of the PS–O2 system leaves the molecular oxygen in an excited state,
+ready to unleash oxidative stress on its immediate surroundings, ultimately causing potentially
+lethal photodamage as a result of biologic interactions that lead to activation of cellular
+death pathways (Finkel and Holbrook 2000; Martindale and Holbrook 2002; Pisoschi and Pop
+2015; Apel and Hirt 2004).
+
+ACKNOWLEDGMENTS
+This publication was supported by an Institutional Development Award (IDeA) from the
+National Institute of General Medical Sciences of the National Institutes of Health under grant
+number P20 GM103418. The author would like to thank Henri J.F. Jansen for his advice while
+working through the details of this paper.
+
+LITERATURE CITED
+Apel, K. and Hirt, H. 2004. REACTIVE OXYGEN SPECIES: Metabolism, Oxidative Stress,
+and Signal Transduction. Annual Review of Plant Biology 55:373-399.
+Beljonne, D., Shuai, Z., Pourtois, G. and Bredas, J.L. 2001. Intersystem Crossing in Conjugated
+Polymers: A Configuration Interaction Description. Journal of Physical Chemistry A
+105(15):3899-3907.
+Bonnett, R. 2000. Chemical Aspects of Photodynamic Therapy. Vol. 1. Advanced Chemistry
+Texts, Gordon and Breach Science Publishers, Australia.
+Dexter, D.L. 1953. A Theory of Sensitized Luminescence in Solids. The Journal of Chemical
+Physics 21:836-850.
+Finkel, T. and Holbrook, N.J. 2000. Oxidants, oxidative stress and the biology of ageing. Nature
+408, 239-247.
+
+Hamblin, M.R. and Mroz, P. (Editors). 2008. Advances in Photodynamic Therapy: Basic,
+Translational, and Clinical. Engineering in Medicine and Biology Series, Artech House, Boston.
+Hasan, T., Moore, A.C.E. and Ortel, B. 2000. Photodynamic Therapy of Cancer. pp. 489–502 in
+Cancer Medicine, 5th edition. BC Decker Inc.
+Hatz, S., Poulsen, L. and Ogilby, P.R. 2008. Time-resolved Singlet Oxygen Phosphorescence
+Measurements from Photosensitized Experiments in Single Cells: Effects of Oxygen Diffusion
+and Oxygen Concentration. Photochemistry and Photobiology 84:1284-1290.
+Henderson, B. and Dougherty, T. (Editors). 1992. Photodynamic Therapy: Basic Principles and
+Clinical Applications. Marcel Dekker, Inc., New York.
+Jacques, S.L. 1992. Laser-tissue interactions: photochemical, photothermal, and
+photomechanical. Surgical Clinics of North America 72:531-558.
+Kautsky, H. 1939. Quenching of Luminescence by Oxygen. Transactions of the Faraday Society
+35:216-219.
+Kearns, D.R. and Khan, A.U. 1969. Sensitized Photooxygenation Reactions and the Role of
+Singlet Oxygen. Photochemistry and Photobiology 10(3):193-210.
+Keszthelyl, T., Weldon, D., Andersen, T.N., Poulsen, T.D., Mikkelsen, K.V. and Ogilby, P.
+1999. Radiative Transitions of Singlet Oxygen: New Tools, New Techniques and New
+Interpretations. Photochemistry and Photobiology 70:531-539.
+Liboff, R.L. 1998. Introductory Quantum Mechanics, 3rd ed. Addison-Wesley, Reading, MA.
+Martindale, J.L. and Holbrook, N.J. 2002. Cellular Response to Oxidative Stress: Signaling for
+Suicide and Survival. Journal of Cellular Physiology 192:1-15.
+Mata, J.E., Dyal, L.A., Rossi, V.M. and Gustafson, S.B. 2006. Solid Tumor Physiology as a
+
+Target for Nanomedicines. ch 14, pp. 1-19 in Nalwa, H.S. and Webster, T. (eds.), Cancer
+Nanotechnology, American Scientific Publishers.
+Nilsson, R., Merkel, P.B. and Kearns, D.R. 1972. Unambiguous Evidence for the Participation of
+Singlet Oxygen in Photodynamic Oxidation of Amino Acids. Photochemistry and Photobiology
+16:117-124.
+Ochsner, M. 1997. Photophysical and photobiological processes in the photodynamic therapy of
+tumors. Journal of Photochemistry and Photobiology B: Biology 39:1-18.
+Peavy, G.M. 2002. Lasers and laser—tissue interaction. Veterinary Clinics: Small Animal
+Practice 32:517-534.
+Pisoschi, A.M. and Pop, A. 2015. The role of antioxidants in the chemistry of oxidative stress: A
+review. European Journal of Medicinal Chemistry 97:55.
+Prasad, P.N. 2003. Introduction to Biophotonics. John Wiley and Sons, Inc., Hoboken, NJ.
+Sakurai, J.J. 1994. Modern Quantum Mechanics, revised ed. Addison-Wesley, Reading, MA.
+Schmidt, R. and Bodesheim, M. 1998. Radiationless Deactivation of the Second Excited Singlet
+State of O2 in Solution. The Journal of Physical Chemistry A 102:4769-4774.
+te Velde, G. T., Bickelhaupt, F. M., Baerends, E. J., Fonseca Guerra, C., van Gisbergen, S. J.,
+Snijders, J. G. and Ziegler, T. 2001. Chemistry with ADF. Journal of Computational Chemistry
+22(9):931-967.
+Turrens, J.F. 2003. Mitochondrial formation of reactive oxygen species. The Journal of
+Physiology 552(2):335-344.
+Wainwright, M. 1998. Photodynamic antimicrobial chemotherapy (PACT). Journal of
+Antimicrobial Chemotherapy 42:13-28.
+
+
+FIGURES
+
+Figure 1. The process leading to the preferred Type II path to photodamage starts when the PS is
+excited by incident light of energy ℎ𝜈. The PS then relaxes via ISC to an excited triplet state,
+whereby it can transfer energy to molecular oxygen via a triplet-triplet electron transfer.
+
+
+[Figure 1 graphic: energy-level diagram. Recoverable labels: ¹PS ground and excited states, hν
+absorption, fluorescence, phosphorescence, Intersystem crossing (ISC) to ³PS, and E transfer via
+triplet-triplet exchange (Type II).]
+Figure 2. Energy level diagrams of the PDT process leading to the creation of singlet oxygen,
+depicted in a HOMO-LUMO representation. a) The initial states of the PS and molecular
+oxygen. b) The PS transitions to an excited spin singlet state via absorption. c) The PS transitions
+to an excited spin triplet state via Intersystem Crossing. d) Triplet-Triplet electron exchange
+between the PS and molecular oxygen leads to the final state of the system, where the excited
+spin singlet state of oxygen is ready to impose oxidative damage on surrounding organisms.
+
+
+
diff --git a/ddE2T4oBgHgl3EQfGAap/content/tmp_files/load_file.txt b/ddE2T4oBgHgl3EQfGAap/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..90353bd500b8d00b83272e7282e25591701917e8
--- /dev/null
+++ b/ddE2T4oBgHgl3EQfGAap/content/tmp_files/load_file.txt
@@ -0,0 +1,803 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf,len=802
+page_content='A Quantum Mechanical Description of Photosensitization in Photodynamic Therapy using a Two-Electron Molecule Approximation Vincent M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Rossi Washburn University Department of Physics & Astronomy, Topeka, KS 66621 vincent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='rossi@washburn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='edu ABSTRACT A fundamental, Quantum Mechanical description of photoactivation of a generic photosensitizer and the ensuing transfer of energy to endogenous oxygen as part of the Type II pathway to photodamage during photodynamic therapy (PDT) is presented.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The PS and molecular oxygen are approximated as two-electron molecules.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Conservation of energy and of angular momenta of the two molecule system are abided via selection rules throughout the four-stage process, including initial states, absorption of a photon by the PS, conversion of the PS to an excited spin triplet via intersystem crossing (ISC), and the transition of molecular oxygen to an excited spin singlet state via a Triplet-Triplet Exchange of electrons with the PS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The provided description of photosensitization will provide students and researchers with a fundamental introduction to PDT, while offering the broader population of Quantum Mechanics and Physical Chemistry students an advanced example of quantum systems in an applied, medical context.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Keywords: Photosensitization, Photodynamic Therapy (PDT), photochemistry, Dexter Exchange, Triplet-Triplet Exchange INTRODUCTION Photodynamic therapy (PDT) is a localized and selective therapy that operates on principles included under the generic classifications of photobiology, photochemistry and photophysics (Jacques 1992;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Henderson and Dougherty 1992;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Hamblin and Mroz 2008;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Bonnett 2000;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Hasan, Moore and Ortel 2000).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' While PDT has found its broadest application and research as a cancer therapy, it has also been used for antimicrobial therapy for combating antibiotic resistant strains (Wainwright 1998).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Three ingredients are required for PDT—a photosensitizer (PS), light, and oxygen—in order to induce photochemical damage to its targets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' In short, the PS is administered to the patient and after an appropriate time interval, the targeted site is illuminated with light of appropriate wavelength to be absorbed by the PS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Upon excitation by light of appropriate energy, the excited PS interacts with endogenous molecular oxygen in order to create reactive oxygen species (ROS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The interactions between the excited PS and endogenous molecular oxygen to generate ROS has been recognized and developed over some time (Kautsky 1939;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Keszthelyl et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 1999).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' These ROS then interact with their immediate environment, creating oxidative damage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Targeted cancer cells or bacteria are eliminated once they reach a threshold of damage via ROS (Nilsson, Merkel, and Kearns 1972;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Schmidt and Bodesheim1998).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=" Absorbed photons transfer discrete energies to the PS, raising it from the singlet ground state (1PS) to an excited singlet state (1PS*), 1PS + hn ® 1PS*, (1) where the product of Planck's constant (h) and the frequency of light absorbed (n) represents the addition of energy via absorption (Fig." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The PS may then fluoresce back to its ground state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Preferably, the PS in its excited singlet state will transition to its excited triplet state (3PS*) through Intersystem Crossing (ISC), 1PS* ® 3PS*.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' (2) Figure 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The process leading to the preferred Type II path to photodamage starts when the PS is excited by incident light of energy ℎν.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The PS then relaxes via ISC to an excited triplet state, whereby it can transfer energy to molecular oxygen via a triplet-triplet electron transfer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Once in the excited triplet state, the photosensitizer may then decay back to its ground state through one of two mechanisms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The first of which, called the Type I pathway to photodamage in PDT, involves the PS in its excited triplet state interacting with the surroundings, thereby losing energy and creating free radicals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The resulting free radicals may then react with endogenous oxygen to form cytotoxic species such as 𝑂𝐻!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' (Jacques 1992;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Wainwright 1998;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Ochsner 1997;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Peavy 2002;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Prasad 2003;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Mata et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 2006).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The Type II pathway to photodamage in PDT entails a direct interaction between the PS in its excited triplet state and endogenous molecular oxygen in its triplet ground state (3O2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Such Type II E (Etransfer via 1PS Intersystem triplet-triplet exchange) crossing (ISC) hv 3PS 10 fluorescence phosphorescence 1PS ★interactions, termed a Triplet-Triplet Exchange, can also cause the PS agent to decay back to its singlet ground state, in turn raising the molecular oxygen to an excited singlet state (1O2*), 3PS* + 3O2 ® 1PS + 1O2*.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' (3) The excited singlet state of molecular oxygen can then cause damage to its surroundings (Nilsson, Merkel, and Kearns 1972;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Schmidt and Bodesheim1998).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Due to the long lifetime of the excited triplet PS, sufficient time is allowed for interactions with endogenous oxygen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' For this reason, the Type II pathway is generally accepted as the most common pathway to photodamage in PDT (Jacques 1992;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Henderson and Dougherty 1992;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Hamblin and Mroz 2008;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Wainwright 1998;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Ochsner 1997;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Peavy 2002;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Prasad 2003;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Mata et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 2006).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The above introduction to PDT is given in a typical fashion as would be found in biological or medical descriptions of PDT (Kearns and Khan 1969).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The remainder of this paper is interested in giving a more rigorous, quantum mechanical explanation of the process of photosensitization in PDT.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The Quantum Mechanical processes involved in activation of the Type II pathway to photodamage will be covered in a simplified fashion so as to serve as an accessible description to students and researchers who are new to PDT research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The subset of researchers responsible for light delivery and light-tissue interactions in PDT may find this description useful.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' As such, the quantum notation more familiar to physicists will be used moving forward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' In particular, quantum states of the PS and molecular oxygen will be treated as those of two-electron molecules.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Representation of photosensitization in PDT using this notation will be more familiar to the students of quantum mechanics and physical chemistry while simultaneously appealing to a rigorous sensibility by detailing the physical phenomena associated with each step of the photosensitization process (Sec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The addition of angular momentum between the two molecules will be employed in order to define the overall state of the system of molecules at each step.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The larger discussion will be summarize at the end of the paper (Sec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' QUANTUM TWO-ELECTRON MODEL We will consider a basic quantum mechanical example of a generic PS interacting with molecular oxygen as part of the desired Type II pathway to photodamage achieved in PDT.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' A generic diagram of the photocativation of the PS and interactions with molecular oxygen are depicted in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' In particular, this work is concerned with describing the interactions between the PS and molecular oxygen from the time the PS is excited via absorption of a photon through the transfer of energy to molecular oxygen via a Triplet-Triplet Exchange.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' As such, all other pathways will be ignored.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Both the PS and molecular oxygen can be approximated as two-electron molecules.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' For example, molecular oxygen forms via the covalent bond between two oxygen atoms, each needing a pair of 2p electrons in order to fill the 2p shell (Turrens 2003).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' This pair of shared 2p electrons will therefore be considered as those undergoing the transitions that follow during the PDT process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The same assumption will be made of the PS, considering that the exchange of energy between the PS and molecular oxygen comes in the form of electron exchange between a pair of two electron systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' In quantum mechanics, we are concerned with eigenvalue problems where we can determine the given set of eigenstates corresponding to a given set of eigenvalues.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The eigenstate of a system corresponds to the wavefunction of the system, or generically speaking, the state of the system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The eigenvalue corresponds to some physically measureable quantity, or characteristic of the system, such as its energy, spin or angular momentum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' As alluded to here, the characteristics of a quantum state can have spatial and spin dependencies, such that their corresponding wavefunctions must also incorporate spatial and spin states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' We can separate the overall wavefunction, Ψ(r⃗, m"), into the product of the two functional dependencies, Ψ(r⃗, m") = Φ(r⃗)χ(m"), (4) where 𝑟⃗ represents the three dimensional spatial dependence of the spatial wave function Φ(𝑟⃗) and 𝑚# is the spin quantum number, representing the spin dependence of the spin wavefunction 𝜒(𝑚#).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
In this context of atomic and molecular physics, the wavefunction represents the overall state of an electron. Since electrons are fermions, their overall wavefunctions must be antisymmetric. Looking specifically at the context of PDT, we are dealing with two-electron molecules. Therefore, the overall wavefunction (4) for both the PS and molecular oxygen must be modified to reflect a two-electron system,

\Psi(\vec{r}_1, m_{s1}; \vec{r}_2, m_{s2}) = \Phi(\vec{r}_1, \vec{r}_2)\,\chi(m_{s1}, m_{s2}),  (5)

where the subscripts 1 and 2 label the two separate electrons. From the requirement that electrons have antisymmetric wavefunctions follows the definition of the singlet and triplet states, which refer specifically to the spin wavefunction \chi(m_{s1}, m_{s2}) of the two-electron system. Combining the spins of the two electrons leads to a set of three possible symmetric spin wavefunctions,

\chi(m_{s1}, m_{s2}) = \begin{cases} \chi_{++} \\ \tfrac{1}{\sqrt{2}}(\chi_{+-} + \chi_{-+}) \\ \chi_{--} \end{cases}  (6)

where + and - refer to the spin-up (m_s = +1/2) and spin-down (m_s = -1/2) states, respectively. This set is specifically called the (spin) triplet state because there are three possible symmetric combinations for the two-electron system. Similarly, there is only a single antisymmetric combination of spins,

\chi(m_{s1}, m_{s2}) = \tfrac{1}{\sqrt{2}}(\chi_{+-} - \chi_{-+}),  (7)

which is therefore referred to as the (spin) singlet state (Sakurai 1994).
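To make the exchange symmetry behind Equations (6) and (7) concrete, here is a minimal numerical sketch (not from the paper): the single-electron spin states \chi_\pm are represented as standard basis vectors, and a SWAP operator exchanges the two spin factors. The triplet combinations come back unchanged, while the singlet picks up a minus sign:

```python
import numpy as np

# Single-electron spin basis: chi_+ = |up>, chi_- = |down>.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# SWAP operator exchanging the two spin-1/2 tensor factors.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

# Triplet states, Eq. (6): all symmetric under electron exchange.
triplet = [
    np.kron(up, up),
    (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2),
    np.kron(down, down),
]

# Singlet state, Eq. (7): antisymmetric under electron exchange.
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

for chi in triplet:
    assert np.allclose(SWAP @ chi, chi)       # symmetric
assert np.allclose(SWAP @ singlet, -singlet)  # antisymmetric
```

The same check generalizes to any two-level degree of freedom; only the (anti)symmetry under SWAP distinguishes the four states.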
One of the spin states from (6) or (7) can therefore be applied directly within the overall two-electron wavefunction (5) for either the PS or molecular oxygen. This leaves us to define the spatial state of the system more thoroughly (Sakurai 1994). Resolving the spatial wavefunction is based on the quantum mechanical rules for systems of identical particles and the assumption that we can start from the model of the simplest two-electron system: the helium atom.
Under this premise, the spatial wavefunction can undergo a swap of electrons such that

\Phi(\vec{r}_1, \vec{r}_2) = \tfrac{1}{\sqrt{2}} \left[ \psi_{100}(\vec{r}_1)\psi_{nlm}(\vec{r}_2) \pm \psi_{100}(\vec{r}_2)\psi_{nlm}(\vec{r}_1) \right],  (8)

where the wavefunctions \psi_{100} and \psi_{nlm} refer to electrons in the ground and possible excited states, respectively. The two states \psi_{100}(\vec{r}_1)\psi_{nlm}(\vec{r}_2) and \psi_{100}(\vec{r}_2)\psi_{nlm}(\vec{r}_1) account for a change of state via exchange of identical particles: exchanging the states of the two electrons changes the configuration of the system. However, the total spatial state (8) is the superposition of these two states, which can be obtained by either the addition or subtraction of the two combinations.
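As an illustrative sketch (using hypothetical one-dimensional orbitals standing in for \psi_{100} and \psi_{nlm}; not the paper's code), the symmetry of the two sign choices in Equation (8) can be checked by swapping the coordinate arguments:

```python
import math

# Hypothetical stand-ins for the ground and excited orbitals.
def psi_100(r):
    return math.exp(-r)

def psi_nlm(r):
    return r * math.exp(-r / 2)

def phi_sym(r1, r2):
    # "+" combination of Eq. (8): symmetric spatial state.
    return (psi_100(r1) * psi_nlm(r2) + psi_100(r2) * psi_nlm(r1)) / math.sqrt(2)

def phi_anti(r1, r2):
    # "-" combination of Eq. (8): antisymmetric spatial state.
    return (psi_100(r1) * psi_nlm(r2) - psi_100(r2) * psi_nlm(r1)) / math.sqrt(2)

r1, r2 = 0.7, 1.9
assert math.isclose(phi_sym(r1, r2), phi_sym(r2, r1))
assert math.isclose(phi_anti(r1, r2), -phi_anti(r2, r1))
```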
The addition of these two spatial states results in a symmetric spatial wavefunction; conversely, the subtraction of the two states results in an antisymmetric spatial wavefunction. Now that the symmetric and antisymmetric representations of the spatial and spin states are defined, we look to their possible combinations for the overall wavefunction of the two-electron system (Sakurai 1994). Since the electron wavefunction must have overall antisymmetry, the antisymmetric spin singlet state (7) must pair with the symmetric spatial state (8), giving the overall antisymmetric singlet state

\Psi_{\mathrm{singlet}}(\vec{r}_1, m_{s1}; \vec{r}_2, m_{s2}) = \tfrac{1}{\sqrt{2}} \left[ \psi_{100}(\vec{r}_1)\psi_{nlm}(\vec{r}_2) + \psi_{100}(\vec{r}_2)\psi_{nlm}(\vec{r}_1) \right] \times \tfrac{1}{\sqrt{2}}(\chi_{+-} - \chi_{-+}).  (9)

Similarly, the symmetric spin triplet (6) must pair with the antisymmetric spatial wavefunction (8), giving the overall antisymmetric triplet state

\Psi_{\mathrm{triplet}}(\vec{r}_1, m_{s1}; \vec{r}_2, m_{s2}) = \tfrac{1}{\sqrt{2}} \left[ \psi_{100}(\vec{r}_1)\psi_{nlm}(\vec{r}_2) - \psi_{100}(\vec{r}_2)\psi_{nlm}(\vec{r}_1) \right] \times \chi(m_{s1}, m_{s2}),  (10)

with \chi(m_{s1}, m_{s2}) any of the three symmetric spin states of Equation (6).
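The pairing rule behind Equations (9) and (10) can also be sketched numerically (again with hypothetical orbitals, not the paper's code): exchanging both the coordinate labels and the spin factors flips the sign of the total wavefunction in both the singlet and the m_s = 0 triplet cases:

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

# Hypothetical stand-ins for the ground and excited orbitals.
def psi_100(r): return np.exp(-r)
def psi_nlm(r): return r * np.exp(-r / 2)

def phi(r1, r2, sign):
    # Eq. (8) with sign = +1 (symmetric) or -1 (antisymmetric).
    return (psi_100(r1) * psi_nlm(r2) + sign * psi_100(r2) * psi_nlm(r1)) / np.sqrt(2)

chi_singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)   # antisymmetric
chi_triplet0 = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)  # symmetric

def psi_total(r1, r2, sign, chi):
    # Total wavefunction as a spin vector scaled by the spatial amplitude.
    return phi(r1, r2, sign) * chi

r1, r2 = 0.7, 1.9
# Full exchange: swap coordinates AND apply SWAP to the spin part.
for sign, chi in [(+1, chi_singlet), (-1, chi_triplet0)]:
    exchanged = SWAP @ psi_total(r2, r1, sign, chi)
    assert np.allclose(exchanged, -psi_total(r1, r2, sign, chi))
```

Only these two pairings (symmetric spatial with singlet spin, antisymmetric spatial with triplet spin) survive the overall antisymmetry requirement.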
The system can be described in terms of the quantum numbers for orbital angular momentum (l), the magnetic quantum number (m_l), spin angular momentum (s), and the spin quantum number (m_s). In addition to the aforementioned quantum numbers comes the principal quantum number (n), which is associated with the energy of the electron orbital. Starting with the principal quantum number, which can take any positive integer value (n = 1, 2, 3, ...), we are able to define the allowed values of the angular momentum and magnetic quantum numbers as follows (Liboff 1998):

l = 0, 1, 2, \ldots, n - 1, \qquad m_l = -l, -l + 1, \ldots, l - 1, l.
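As a small illustration of these standard selection rules (not code from the paper), the allowed (l, m_l) pairs for a given n can be enumerated directly; the count of spatial orbitals is n^2:

```python
def allowed_states(n):
    """Enumerate the (l, ml) pairs allowed for principal quantum number n:
    l = 0, 1, ..., n - 1 and ml = -l, ..., +l."""
    return [(l, ml) for l in range(n) for ml in range(-l, l + 1)]

# The number of spatial orbitals for a given n is n**2.
for n in (1, 2, 3, 4):
    assert len(allowed_states(n)) == n ** 2

# e.g. n = 2 gives the 2s orbital plus the three 2p orbitals:
assert allowed_states(2) == [(0, 0), (1, -1), (1, 0), (1, 1)]
```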
+page_content='+e8NiofVG5F4 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='psFnrm7mG3JVb ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='1KZ4m1omq9pQS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='dpGRc2hKWHz8H ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='DJFCWGbzARD ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='HvFZI19lsz/nv ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='0/BKSmyPfBqfT ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='SXI8Ofp4NDgZ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='duvYB8/BCzAEC ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='XgDTsAHMANzQI ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='J18CX4FnwPRfg ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='1/BH+bKVh0N1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='5Bv6L8NdfLhrp ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='xw= ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+Ψ_triplet = (1/√2) [ψ_100(r⃗1) ψ_nlm(r⃗2) − ψ_100(r⃗2) ψ_nlm(r⃗1)] × (1/√3) [χ_++ + (1/√2)(χ_+− + χ_−+) + χ_−−]
+l = 0, 1, 2, ..., (n − 1) (11)
+m_l = −l, −l + 1, ..., 0, 1, 2, ..., +l. (12)
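To make the counting in (11) and (12) concrete, here is a minimal Python sketch (the function name `allowed_orbital_states` is ours, purely illustrative) enumerating the allowed (l, m_l) pairs for a given principal quantum number n:

```python
# Allowed orbital quantum numbers for principal quantum number n,
# per Eqs. (11) and (12): l = 0, 1, ..., n-1 and m_l = -l, ..., +l.
def allowed_orbital_states(n):
    return [(l, ml) for l in range(n) for ml in range(-l, l + 1)]

# n = 2: l = 0 gives m_l = 0; l = 1 gives m_l = -1, 0, +1.
print(allowed_orbital_states(2))  # [(0, 0), (1, -1), (1, 0), (1, 1)]
```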
+In addition to the limitations placed on the possible states of angular momentum and the corresponding magnetic quantum numbers, there are quantum rules for combining angular momenta. The need to add angular momenta at the quantum level can arise from considering multiple particles within a system, or from combining different forms of angular momentum. Both of these scenarios will affect our quantum mechanical discussion of PDT.
+If we begin by defining a generic angular momentum term, j, two angular momenta (j1 and j2) can be added to reach the following permitted values:
+j_min = |j1 − j2| (13)
+j_max = j1 + j2. (14)
+Based on these maximum and minimum values of total angular momentum,
+j = |j1 − j2|, ..., j1 + j2 (15)
+is the range of acceptable total angular momentum values (Liboff 1998). When dealing with the addition of angular momenta, the range
+m_j = −(j1 + j2), ..., 0, ..., j1 + j2 (16)
+follows from (12) and (15).
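The addition rule of Eqs. (13)-(15) can be sketched in a few lines of Python; `couple` is a hypothetical helper name, not from the source:

```python
# Allowed total angular momenta when coupling j1 and j2, per Eq. (15):
# j runs from |j1 - j2| up to j1 + j2 in integer steps.
def couple(j1, j2):
    jmin, jmax = abs(j1 - j2), j1 + j2
    return [jmin + k for k in range(int(jmax - jmin) + 1)]

print(couple(0.5, 0.5))  # [0.0, 1.0]  (two spin-1/2 particles)
print(couple(1, 1))      # [0, 1, 2]
```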
+These allowed values for the quantum numbers are based on the solution for the spatial wavefunction of the hydrogen atom in spherical coordinates, obtained by separating radial and angular dependencies:
+Φ(r, θ, φ) = R(r) Y_l^{m_l}(θ, φ), (17)
+where R(r) represents the radial wavefunction and Y_l^{m_l}(θ, φ) the spherical harmonics. Of key importance is the orthonormality of these special functions.
+Stating the wavefunction in terms of the given quantum numbers via subscripts, Φ_{nlm_l}, taking the inner product of two such wavefunctions (or equivalently, integrating the product of the two wavefunctions over all space) returns
+⟨Φ_{n′l′m′_l} | Φ_{nlm_l}⟩ = δ_{nn′} δ_{ll′} δ_{m_l m′_l}, (18)
+where any given delta function takes the value of zero when the respective indices differ and unity when they are the same (Liboff 1998).
+As an example illustrating the principle of conservation of energy: since the quantum number n is tied to the energy of a state, the inner product of the final (Φ_{n′l′m′_l}) and initial (Φ_{nlm_l}) states will be zero if n′ ≠ n, meaning the system cannot transition spontaneously and unperturbed between the two states. The result will be unity if n′ = n, such that the transition between the two states does not violate the conservation of energy. The only way to change the energy of the system between the initial and final states is to operate on them by doing work on the system, or by letting the system itself do work. Since there is no operator acting on the energy of the states in (18), the energy of the system must remain the same between the final and initial states. Similarly, the conservation of angular momentum is upheld with respect to the angular momentum quantum number l between the two states.
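A minimal sketch of the selection behavior encoded by Eq. (18), with illustrative helper names (`delta`, `overlap`) of our own choosing:

```python
# Overlap of two hydrogenic states per Eq. (18): a product of Kronecker
# deltas in n, l, and m_l. Zero overlap means the unperturbed transition
# between the two states is forbidden.
def delta(a, b):
    return 1 if a == b else 0

def overlap(state1, state2):
    (n1, l1, m1), (n2, l2, m2) = state1, state2
    return delta(n1, n2) * delta(l1, l2) * delta(m1, m2)

print(overlap((1, 0, 0), (1, 0, 0)))  # 1: identical states
print(overlap((1, 0, 0), (2, 1, 0)))  # 0: energy would have to change
```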
+A transition from Φ_{nlm_l} directly to Φ_{n′l′m′_l} is forbidden unless l′ = l. This is of fundamental importance for the following discussion, as we shall see that the angular momentum of the PS goes from l = 0 to l = 1 during activation in PDT. This transition is, however, perfectly acceptable because the PS is being acted on by the incident light: by absorbing a photon (which carries an angular momentum of l = 1), the PS gains angular momentum in addition to energy. Later, this angular momentum will be transferred to molecular oxygen along with energy in order to elicit a phototoxic effect. Ultimately, when operating on one quantum state in order to cause it to transition to another quantum state, the operator acting on the system will invoke a set of selection rules as to which quantum transitions are allowed versus forbidden.
+One further note should be made on the notation employed. The spin angular momentum (s) and spin quantum number (m_s) have been left out of the discussion so far. However, as the name suggests, spin angular momentum is another form of angular momentum, or at least behaves quantum mechanically in exactly the same fashion as angular momentum. The addition of spin angular momenta therefore abides by the general rules for addition of angular momenta (15). The spin angular momentum of an electron is s = 1/2, such that the associated spin quantum numbers are m_s = ±1/2. Since both the PS and the molecular oxygen of interest can each be considered two-electron systems, their respective spin angular momenta can take values of s = 0, 1 via the rules for addition of angular momenta. Therefore, the spin quantum numbers for each of these individual molecules can take the values m_s = 0, ±1. The photon carries no spin angular momentum (s = 0, m_s = 0).
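As a quick check of these statements, the same addition rule applied to two electron spins reproduces s = 0, 1 and the associated m_s values; `couple_spins` is an illustrative name:

```python
# Total spin of a two-electron system (each electron s = 1/2) by the
# angular momentum addition rule of Eq. (15), with the m_s values
# -s <= m_s <= s listed for each allowed total spin.
def couple_spins(s1, s2):
    smin, smax = abs(s1 - s2), s1 + s2
    totals = [smin + k for k in range(int(smax - smin) + 1)]
    return {s: [-s + m for m in range(int(2 * s) + 1)] for s in totals}

# Both the PS and molecular oxygen are treated as two-electron systems:
print(couple_spins(0.5, 0.5))  # {0.0: [0.0], 1.0: [-1.0, 0.0, 1.0]}
```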
+To begin our formal discussion of the quantum mechanical processes involved in PDT, we can use the addition of angular momenta to determine the state of each of the molecules, using the condensed notation
+|Ψ⟩_molecule = |l, s; m_l, m_s⟩. (19)
+In this notation, the total state of the system is the product of the two molecular states
+|Ψ⟩_system = |Ψ⟩_PS ⊗ |Ψ⟩_O, (20)
+where again PS and O refer to the photosensitizer and molecular oxygen, respectively.
+Initially, both the PS and oxygen reside in their ground states, the PS in a spin singlet and the molecular oxygen in a spin triplet (Fig. 2a), such that
+|Ψ⟩_i = |l = 0, s = 0; m_l = 0, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O. (21)
+Again using the addition of angular momentum, this time between the two molecules, the overall initial state given in terms of the same quantum numbers becomes
+|Ψ⟩_i = |l = 0, s = 1; m_l = 0, m_s = 0, ±1⟩. (22)
+When the PS absorbs light of the appropriate wavelength, it transitions to an excited singlet state (Fig. 2b). Since the photon carries a quantum angular momentum of l = 1, this transition corresponds to an increase in orbital angular momentum of Δl = +1 within the PS. The state of molecular oxygen remains unchanged during this process. Upon absorption, the system transitions to the state
+|Ψ⟩_abs = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O, (23)
+where again the addition of angular momentum between molecules gives the overall state
+|Ψ⟩_abs = |l = 1, s = 1; m_l = 0, ±1, m_s = 0, ±1⟩. (24)
+Figure 2. Energy level diagrams of the PDT process leading to the creation of singlet oxygen, depicted in a HOMO-LUMO representation. a) The initial states of the PS and molecular oxygen. b) The PS transitions to an excited spin singlet state via absorption. c) The PS transitions to an excited spin triplet state via Intersystem Crossing. d) Triplet-Triplet electron exchange between the PS and molecular oxygen leads to the final state of the system, where the excited spin singlet state of oxygen is ready to impose oxidative damage in surrounding organisms.
+page_content=' Once in the excited state, the PS can either transition back to its ground state via fluorescence, or undergo a nonradiative transition to a spin triplet state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
The latter process is desirable for PDT, allowing the PS in its excited triplet state to interact with molecular oxygen. The nonradiative process by which the PS moves from an excited spin singlet to an excited spin triplet state is known as Intersystem Crossing, whereby the spin of the excited electron is no longer paired to that of the electron in the ground state (Fig. 2c) (Bonnett 2000). Due to the conservation of spin angular momentum, the transition from a singlet to a triplet state is quantum mechanically forbidden. However, Intersystem Crossing is made possible by spin-orbit coupling, where the orbital and spin angular momenta are combined to give the possible total angular momenta in (15). This nonradiative transition relies upon the overlap of the vibrational states of the initial and final states of the electron (Bonnett 2000; Sakurai 1994; Liboff 1998; Beljonne et al. 2001). Again, molecular oxygen remains in its ground state during this process.
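The coupling rule behind the total angular momenta referred to in (15) is the standard one: combining orbital and spin angular momenta l and s yields j = |l − s|, ..., l + s. A minimal sketch of that rule (the function name is illustrative, and integer quantum numbers are assumed, which suffices here):

```python
def coupled_j_values(l, s):
    """Total angular momentum quantum numbers j allowed when coupling
    orbital (l) and spin (s) angular momenta: j runs from |l - s|
    to l + s in integer steps (integer quantum numbers assumed)."""
    return list(range(abs(l - s), l + s + 1))

# For an excited electron with l = 1 in a spin triplet (s = 1),
# spin-orbit coupling allows the total angular momenta j = 0, 1, 2.
print(coupled_j_values(1, 1))  # [0, 1, 2]
```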
Via Intersystem Crossing, the system transitions to the state

|Ψ⟩_ISC = |l = 1, s = 1; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O2. (25)

The addition of angular momentum between the molecules gives the possible states

|Ψ⟩_ISC = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩ + |l = 1, s = 1; m_l = 0, ±1, m_s = 0, ±1⟩ + |l = 1, s = 2; m_l = 0, ±1, m_s = 0, ±1, ±2⟩, (26)

where the total spins s = 0, 1, 2 are allowed along with their corresponding −s ≤ m_s ≤ s values. Although the excited spin triplet state of the PS may phosphoresce back to its ground state, this state is long-lived, so that interaction with molecular oxygen becomes more likely (Hatz, Poulsen and Ogilby 2008). The PS in its excited triplet state interacts with the molecular oxygen in its ground state (spin triplet) via a Triplet-Triplet Exchange of electrons (Fig. 2d). In this process, the excited electron of the PS transitions to the molecular oxygen, and the electron with matching spin in the ground state of molecular oxygen transitions to the ground state of the PS. Along with this swapping of electrons comes an exchange of energy, such that the PS returns to its ground (spin singlet) state and the molecular oxygen transitions to an excited (spin singlet) state (Fig. 2d) (Bonnett 2000; Dexter 1953).
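The state counting in Equation (26) follows the standard addition rule for two angular momenta: coupling the PS excited triplet (s = 1) to ground-state O2 (s = 1) gives total spins s = 0, 1, 2, with 1 + 3 + 5 = 9 combined states. A minimal sketch enumerating them (the function name is illustrative; integer spins are assumed):

```python
def coupled_spin_states(s1, s2):
    """Enumerate the (s, m_s) pairs allowed when adding two spins:
    s = |s1 - s2|, ..., s1 + s2, each with m_s = -s, ..., +s
    (integer spins only, which suffices here)."""
    states = []
    for s in range(abs(s1 - s2), s1 + s2 + 1):
        for ms in range(-s, s + 1):
            states.append((s, ms))
    return states

# PS excited triplet (s = 1) coupled to ground-state O2 (s = 1):
# the three terms of Equation (26), nine states in total.
states = coupled_spin_states(1, 1)
print(sorted({s for s, _ in states}))  # [0, 1, 2]
print(len(states))                     # 9
```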
The Triplet-Triplet Exchange is also referred to as a Dexter Exchange, after the seminal work “A Theory of Sensitized Luminescence in Solids” by D.L. Dexter, which explains this process thoroughly. While the focus of this section is simply to give a general description of the quantum states of the PS and molecular oxygen during the stages of PDT, the reader is referred to Dexter’s work for a more rigorous and thorough description of the exchange (Dexter 1953).
Continuing with the same quantum numbers, the corresponding wave function for the system becomes

|Ψ⟩_TTE = |l = 0, s = 0; m_l = 0, m_s = 0⟩_PS ⊗ |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩_O2. (27)

The addition of angular momentum between the molecules gives the state

|Ψ⟩_TTE = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩. (28)

Given that the total angular momentum of the system must remain unchanged from (26) during this process, and the combined state after the exchange is (28), we can conclude that after the PS underwent Intersystem Crossing the system must have been in the first of the states listed in (26),

|Ψ⟩_ISC = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩. (29)

From this conclusion, it follows that after the PS undergoes Intersystem Crossing, the system must be described by the individual molecular states

|Ψ⟩_ISC = |l = 1, s = 1; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O2. (30)
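The selection argument above can be mimicked in a few lines: of the nine combined states available after Intersystem Crossing, only the spin singlet is compatible with the final exchange state (28). A minimal sketch (variable names are illustrative):

```python
# Combined states available after Intersystem Crossing (Eq. 26):
# total spin s = 0, 1, 2, each with m_s = -s, ..., +s.
post_isc = [(s, ms) for s in (0, 1, 2) for ms in range(-s, s + 1)]

# The final Triplet-Triplet Exchange state (Eq. 28) is a spin singlet,
# s = 0 and m_s = 0. Filtering for compatibility singles out the
# first term of Eq. (26), as in Eq. (29).
compatible = [(s, ms) for s, ms in post_isc if (s, ms) == (0, 0)]
print(compatible)  # [(0, 0)]
```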
Pay particular attention to the transfer and conservation of angular momentum throughout the processes described. Both molecules are initially in their ground states. Upon excitation of the PS via absorption of a photon, the angular momentum of the system increases. While the angular momentum of the PS does not change during Intersystem Crossing, it does change in the final step, as the angular momentum of the PS is transferred to the molecular oxygen. To better demonstrate the point, the final step of Figure 2 (the Triplet-Triplet Exchange between the PS and oxygen) is repeated in Figure 3, along with the associated molecular orbitals of oxygen and protoporphyrin IX (PpIX), a typical photosensitizer employed clinically in PDT of cancers. The increase in angular momentum of molecular oxygen via the Triplet-Triplet Exchange is visually apparent.
Figure 3. The HOMO-LUMO representations employed in the final step of Figure 2 are shown again here, with the corresponding molecular orbitals of O2 and a common PS, protoporphyrin IX (PpIX). Molecular orbitals were generated via the Amsterdam Density Functional program (te Velde et al. 2001).
SUMMARY

A summary of the states of the PS–O2 system, based upon the transitions and physical processes described, is as follows:

1. The PS and molecular oxygen begin in their ground states, the PS in a spin singlet and the molecular oxygen in a spin triplet,

|Ψ⟩_0 = |l = 0, s = 0; m_l = 0, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O2. (31)
2. Upon absorption of a photon, the PS is raised to an excited spin singlet state, while the molecular oxygen is unaffected,

|Ψ⟩_abs = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O2. (32)
3. The PS undergoes a nonradiative transition from the excited spin singlet to an excited spin triplet via Intersystem Crossing, while the molecular oxygen again remains unchanged in its spin triplet ground state,

|Ψ⟩_ISC = |l = 1, s = 1; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O2. (33)
4. Finally, the molecular oxygen is raised from its ground spin triplet state to an excited spin singlet state as the PS simultaneously relaxes from its excited spin triplet state back to its spin singlet ground state,

|Ψ⟩_TTE = |l = 0, s = 0; m_l = 0, m_s = 0⟩_PS ⊗ |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩_O2. (34)
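The four stages above can be tabulated to make the (l, s) bookkeeping explicit. The stage labels and dictionary layout below are illustrative, not taken from the source:

```python
# (l, s) quantum numbers of the PS and O2 at each PDT stage,
# following Equations (31)-(34).
stages = {
    "ground (31)":     {"PS": (0, 0), "O2": (0, 1)},
    "absorption (32)": {"PS": (1, 0), "O2": (0, 1)},
    "ISC (33)":        {"PS": (1, 1), "O2": (0, 1)},
    "exchange (34)":   {"PS": (0, 0), "O2": (1, 0)},
}

# Photon absorption raises the PS orbital angular momentum (l: 0 -> 1);
# Intersystem Crossing flips the PS spin (s: 0 -> 1); the Triplet-
# Triplet Exchange transfers the excitation from the PS to O2.
for name, mol in stages.items():
    print(name, "PS (l,s) =", mol["PS"], " O2 (l,s) =", mol["O2"])
```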
Again, a summary of these processes and states is depicted in Figure 2, where the overall wavefunction of the system at each step is listed with the corresponding energy diagram. Simply put, energy from the excitation light is absorbed by the PS. Following some internal transitions, the PS is then able to transfer the added energy to the molecular oxygen via Triplet-Triplet Exchange. The final state of the PS–O2 system leaves the molecular oxygen in an excited state, ready to unleash oxidative stress on its immediate surroundings, ultimately causing potentially lethal photodamage as a result of biologic interactions that lead to activation of cellular death pathways (Finkel and Holbrook 2000; Martindale and Holbrook 2002; Pisoschi and Pop 2015; Apel and Hirt 2004).
ACKNOWLEDGMENTS

This publication was supported by an Institutional Development Award (IDeA) from the National Institute of General Medical Sciences of the National Institutes of Health under grant number P20 GM103418. The author would like to thank Henri J.F. Jansen for his advice while working through the details of this paper.
LITERATURE CITED

Apel, K. and Hirt, H. 2004. Reactive Oxygen Species: Metabolism, Oxidative Stress, and Signal Transduction. Annual Review of Plant Biology 55:373-399.

Beljonne, D., Shuai, Z., Pourtois, G. and Bredas, J.L. 2001. Intersystem Crossing in Conjugated Polymers: A Configuration Interaction Description. Journal of Physical Chemistry A 105(15):3899-3907.

Bonnett, R. 2000. Chemical Aspects of Photodynamic Therapy. Vol. 1, Advanced Chemistry Texts, Gordon and Breach Science Publishers, Australia.

Dexter, D.L. 1953. A Theory of Sensitized Luminescence in Solids. The Journal of Chemical Physics 21:836-850.

Finkel, T. and Holbrook, N.J. 2000. Oxidants, oxidative stress and the biology of ageing. Nature 408:239-247.

Hamblin, M.R. and Mroz, P. (Editors). 2008. Advances in Photodynamic Therapy: Basic, Translational, and Clinical. Engineering in Medicine and Biology Series, Artech House, Boston.

Hasan, T., Moore, A.C.E. and Ortel, B. 2000. Photodynamic Therapy of Cancer. pp. 489-502 in Cancer Medicine, 5th edition. BC Decker Inc.

Hatz, S., Poulsen, L. and Ogilby, P.R. 2008. Time-resolved Singlet Oxygen Phosphorescence Measurements from Photosensitized Experiments in Single Cells: Effects of Oxygen Diffusion and Oxygen Concentration. Photochemistry and Photobiology 84:1284-1290.

Henderson, B. and Dougherty, T. (Editors). 1992. Photodynamic Therapy: Basic Principles and Clinical Applications. Marcel Dekker, Inc., New York.

Jacques, S.L. 1992. Laser-tissue interactions: photochemical, photothermal, and photomechanical. Surgical Clinics of North America 72:531-558.

Kautsky, H. 1939. Quenching of Luminescence by Oxygen. Transactions of the Faraday Society 35:216-219.
+page_content=' Kearns, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' and Khan, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 1969.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Sensitized Photooxygenation Reactions and the Role of Singlet Oxygen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Photochemistry and Photobiology 10(3):193-210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Keszthelyl, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', Weldon, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', Andersen, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', Poulsen, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', Mikkelsen, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' and Ogilby, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 1999.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Radiative Transitions of Singlet Oxygen: New Tools, New Techniques and New Interpretations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Photochemistry and Photobiology 70:531-539.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Liboff, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Introductory Quantum Mechanics, 3rd ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Addison-Wesley, Reading, MA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Martindale, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' and Holbrook, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Cellular Response to Oxidative Stress: Signaling for Suicide and Survival.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Journal of Cellular Physiology 192:1-15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Mata, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', Dyal, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', Rossi, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' and Gustafson, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Solid Tumor Physiology as a Target for Nanomedicines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' ch 14, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 1-19 in Nalwa, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' and Webster, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' (eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' ), Cancer Nanotechnology, American Scientific Publishers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Nilsson, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', Merkel, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='B and Kearns, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 1972.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Unambiguous Evidence for the Participation of Singlet Oxygen in Photodynamic Oxidation of Amino Acids.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Photochemistry and Photobiology 16:117-124.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Ochsner, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 1997.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Photophysical and photobiological processes in the photodynamic therapy of tumors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Journal of Photochemistry and Photobiology B: Biology 39:1-18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Peavy, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Lasers and laser—tissue interaction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Veterinary Clinics: Small Animal Practice 32:517-534.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Pisoschi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' and Pop, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The role of antioxidants in the chemistry of oxidative stress: A review.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' European Journal of Medicinal Chemistry 2015, 97:55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Prasad, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Introduction to Biophotonics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' John Wiley and Sons, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', Hoboken, NJ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Sakurai, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 1994.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Modern Quantum Mechanics, revised ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Addison-Wesley, Reading, MA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Schmidt, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' and Bodesheim, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Radiationless Deactivation of the Second Excited Singlet State of O2 in Solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The Journal of Physical Chemistry A 102:4769-4774.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' te Velde, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', Bickelhaupt, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', Baerends, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', Fonseca Guerra, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', van Gisbergen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=', Snijders, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' and Ziegler, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Chemistry with ADF.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Journal of Computational Chemistry 22(9):931-967.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Turrens, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Mitochondrial formation of reactive oxygen species.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' The Journal of Physiology 552(2):335-344.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Wainwright, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Photodynamic antimicrobial chemotherapy (PACT).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+page_content=' Journal of Antimicrobial Chemotherapy 42:13-28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'}
+FIGURES
+Figure 1. The process leading to the preferred Type II path to photodamage starts when the PS is excited by incident light of energy hν. The PS then relaxes via ISC to an excited triplet state, whereby it can transfer energy to molecular oxygen via a triplet-triplet electron transfer.
+Figure 2. Energy level diagrams of the PDT process leading to the creation of singlet oxygen, depicted in a HOMO-LUMO representation. a) The initial states of the PS and molecular oxygen. b) The PS transitions to an excited spin singlet state via absorption. c) The PS transitions to an excited spin triplet state via intersystem crossing. d) Triplet-triplet electron exchange between the PS and molecular oxygen leads to the final state of the system, where the excited spin singlet state of oxygen is ready to impose oxidative damage in surrounding organisms.
+Figure 3. The HOMO-LUMO representations employed in the final step of Figure 2 are represented here again, with the corresponding molecular orbitals of O2 and a common PS, protoporphyrin-IX (PpIX). Molecular orbitals were generated via the Amsterdam Density Functional program (te Velde et al. 2001).
diff --git a/ddFST4oBgHgl3EQfEzgN/content/tmp_files/2301.13715v1.pdf.txt b/ddFST4oBgHgl3EQfEzgN/content/tmp_files/2301.13715v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..c6fb5aeb6adc1c28bdb80bdc7f007cbab595c1d6
--- /dev/null
+++ b/ddFST4oBgHgl3EQfEzgN/content/tmp_files/2301.13715v1.pdf.txt
@@ -0,0 +1,1407 @@
+Physics-constrained 3D Convolutional Neural Networks for Electrodynamics
+Alexander Scheinker* and Reeju Pokharel
+Los Alamos National Laboratory
+*Electronic Email: ascheink@lanl.gov
+
+Abstract
+We present a physics-constrained neural network (PCNN) approach to solving Maxwell’s equations for the electromagnetic
+fields of intense relativistic charged particle beams. We create a 3D convolutional PCNN to map time-varying current and charge
+densities J(r,t) and ρ(r,t) to vector and scalar potentials A(r,t) and φ(r,t) from which we generate electromagnetic fields according
+to Maxwell’s equations: B = ∇×A, E = −∇φ −∂A/∂t. Our PCNNs satisfy hard constraints, such as ∇ · B = 0, by construction. Soft
+constraints push A and φ towards satisfying the Lorenz gauge.
+
+INTRODUCTION
+Electrodynamics is ubiquitous in describing physical
+processes governed by charged particle dynamics including
+everything from models of universe expansion, galactic disks
+forming cosmic ray halos, accelerator-based high energy X-ray
+light sources, achromatic metasurfaces, metasurfaces for
+dynamic holography and on-chip diffractive neural networks,
+down to the radiative damping of individual
+accelerated electrons [1-21].
+Despite widely available high-performance computing,
+numerically calculating relativistic charged
+particle dynamics is still a challenge and an open area of
+research for large collections of particles undergoing
+collective effects in dynamics involving plasma turbulence
+[22], space charge forces [23,24], and coherent synchrotron
+radiation [25,26]. For example, the photo-injectors of modern
+X-ray free electron lasers such as the LCLS, SwissFEL, and
+EuXFEL and plasma wakefield accelerators such as FACET-II
+can produce high-quality, intense bunches of up to 2 nC charge
+per bunch, with few-picosecond rms lengths, that
+are accelerated and squeezed down to lengths of tens to
+hundreds of femtoseconds [27-32]. At low energy near the
+injector, the 6D phase space (x, y, z, px, py, pz) dynamics of such
+bunches are strongly coupled through collective space charge
+(SC) forces. At higher energies, especially in bunch
+compressors, where charged particle trajectories are curved
+through magnetic chicanes, the dynamics are coupled through
+collective coherent synchrotron radiation (CSR).
+A 2 nC bunch contains N ≈ 1.25 × 10^10 electrons, for which
+calculating exact individual particle-to-particle SC and CSR
+interactions is a computationally expensive O(N^2) process. For
+SC calculations, an O(N^2) process, such as the SpaceCharge3D
+routine in the particle dynamics simulation code General
+Particle Tracer (GPT) may be necessary for intense low energy
+beams near the injector where the longitudinal (z) velocities
+of individual particles in a bunch have a large variation and are
+comparable to transverse (x,y) velocities [33,34]. For
+relativistic particles, many conventional approaches for SC
+calculations greatly reduce the number of required
+calculations by utilizing particle-in-cell methods with macro-
+particles, such as the SpaceCharge3DMesh routine in GPT. For
+CSR, relativistic state-of-the-art 3D CSR calculations still rely
+on a full set of point-to-point calculations [35].
+A charged particle’s electromagnetic Lagrangian is
+L = −mc²/γ + ev·A − eφ,   γ = 1/√(1 − v²/c²),   (1)
+where e is the particle’s charge, c is the speed of light, v = |v|,
+and A and φ are the vector and scalar potentials, respectively,
+which define the magnetic (B) and electric (E) fields as
+B = ∇ × A,   E = −∇φ − ∂A/∂t,   (2)
+for which the relativistic Lorentz force law is
+dp/dt = e(E + v × B),   p = γmv.   (3)
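As a minimal sketch of Eq. 3 (an illustration only, not part of the paper's or GPT's implementation), a single explicit-Euler particle push can be written as follows; the time step and the use of SI units are our own assumptions:

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def lorentz_push(p, r, E, B, q, m, dt):
    """One explicit-Euler step of dp/dt = q(E + v x B), with p = gamma*m*v (Eq. 3)."""
    gamma = np.sqrt(1.0 + np.dot(p, p) / (m * C) ** 2)  # gamma recovered from momentum
    v = p / (gamma * m)                                  # velocity from momentum
    p_new = p + dt * q * (E + np.cross(v, B))            # momentum update (Lorentz force)
    r_new = r + dt * v                                   # position update
    return p_new, r_new
```

A production space-charge code would use a structure-preserving integrator such as the Boris push rather than explicit Euler, which only conserves |p| in a pure magnetic field to first order in dt.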
+The E and B dynamics are coupled and depend on current and
+charge densities J and ρ as described by Maxwell’s equations
+∇ · E = ρ/ε₀,   ∇ · B = 0,   (4)
+∇ × E = −∂B/∂t,   ∇ × B = μ₀(J + ε₀ ∂E/∂t).   (5)
+A typical approach to numerically solving Eq. 1-5 starts with
+initial charge ρ(x,y,z,t = 0) and current profiles J(x, y, z, t = 0)
+and their rates of change as well as any external electric and
+magnetic fields Eext(x,y,z,t = 0), Bext(x,y,z,t = 0) and their
+rates of change, which may be produced by mag- nets and
+radio-frequency resonant acceleration cavities as is typical in
+high intensity charged particle accelerators. The total
+electromagnetic fields are then calculated as the sum of the
+external fields and the self-fields produced by the current
+and charge densities themselves according to Eq. 4, 5. The
+initial fields apply a force on the particles causing a change
+in momentum and position, as defined by Eq. 3. The most
+computationally expensive part of the process is the
+calculation of the self-fields generated by the particle
+distribution.
+In this work, we introduce a physics-constrained
+neural network (PCNN) approach to solving Maxwell’s
+equations for the self-fields generated by relativistic charged
+particle beams. For example, for the problem of mapping
+current density J to an estimate B̂ of the associated
+magnetic field B, we build Eq. 2 into the structure of our NN
+and generate the vector potential Â, which defines the
+magnetic field as
+B̂ = ∇ × Â  ⟹  ∇ · B̂ = ∇ · (∇ × Â) = 0,   (6)
+which satisfies the physics constraint by construction.
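The construction in Eq. 6 can be checked numerically: if B̂ is obtained by applying a discrete curl to any tensor a network emits, the matching discrete divergence of B̂ vanishes to floating-point precision, because the per-axis difference operators commute. A minimal NumPy sketch (illustrative only; the grid size and function names are our own choices, not the paper's code):

```python
import numpy as np

def curl(A, d):
    """Discrete curl of A with shape (3, nx, ny, nz), uniform grid spacing d."""
    dAz_dy = np.gradient(A[2], d, axis=1); dAy_dz = np.gradient(A[1], d, axis=2)
    dAx_dz = np.gradient(A[0], d, axis=2); dAz_dx = np.gradient(A[2], d, axis=0)
    dAy_dx = np.gradient(A[1], d, axis=0); dAx_dy = np.gradient(A[0], d, axis=1)
    return np.stack([dAz_dy - dAy_dz, dAx_dz - dAz_dx, dAy_dx - dAx_dy])

def div(F, d):
    """Matching discrete divergence of F with shape (3, nx, ny, nz)."""
    return sum(np.gradient(F[i], d, axis=i) for i in range(3))

# Any tensor can stand in for the network's vector-potential output A-hat:
rng = np.random.default_rng(0)
A_hat = rng.standard_normal((3, 16, 16, 16))
B_hat = curl(A_hat, d=1.0)  # divergence-free by construction (Eq. 6)
```

This is why the hard constraint holds regardless of the network's weights: no training signal is needed to enforce ∇ · B̂ = 0.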
+
+
+
+Fig 1: Various NN approaches to generate a magnetic field B̂ (an
+estimate of B) from current density J are shown.
+Neural networks (NN) are powerful machine learning (ML)
+tools which can extract complex physical relationships
+directly from data and have been used for speeding up the
+studies of complex physical systems [36-43]. Incredibly
+powerful and flexible physics-informed neural networks
+(PINNs), which include soft constraints in the NN’s cost
+function, have been developed and have shown great
+capabilities for complex fluid dynamics simulations [44],
+material science [45], for symplectic single particle tracking
+[46], for learning molecular force fields [47], and for large
+classes of partial differential equations [48-50].
+
+For the problem of mapping current density J to an
+estimate B^, the PINN approach is to train a neural network
+with a cost function defined as
+C = w_B ∫|B − B̂|² dV + w_∇ ∫|∇ · B̂|² dV
+  = w_B ‖B − B̂‖² + w_∇ ‖∇ · B̂‖²,   (7)
+where the first term depends on magnetic field prediction
+accuracy and the second term penalizes violation of the
+physics constraint ∇ · B̂ = 0, as shown in Figure 1. However,
+with soft PINN-type constraints there is no guarantee that
+the constraints are always satisfied, which is in contrast to
+the hard constraints implemented in our approach, which
+guarantee that constraints are not violated within numerical
+and finite discretization limits. Furthermore, when utilizing
+PINN-type soft constraints there is a tradeoff between the
+minimization of the two terms in Eq. 7 based on the choice
+of weights wB and w∆. Intuitively this tradeoff can be
+understood by the fact that the easiest way for a neural
+network to satisfy ∇ · Bˆ = 0 is Bˆ ≡ C for any constant C. For
+hard constraints there is no such tradeoff, the cost function
+only penalizes field accuracy and the constraint itself is built
+into how the field is constructed. In our PCNN approach, our
+cost function is simply
+C = ‖B − B̂‖²,   (8)
+and there is no tradeoff between reconstruction accuracy and
+physics constraint enforcement.
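In schematic form, the two training objectives differ only in the penalty term. The NumPy sketch below mirrors Eqs. 7 and 8; the default weights and the use of mean-squared norms are illustrative choices, not values from the paper:

```python
import numpy as np

def pinn_cost(B, B_hat, div_B_hat, w_B=1.0, w_div=0.1):
    """Soft-constrained PINN cost (Eq. 7): accuracy term plus divergence penalty."""
    return w_B * np.mean((B - B_hat) ** 2) + w_div * np.mean(div_B_hat ** 2)

def pcnn_cost(B, B_hat):
    """Hard-constrained PCNN cost (Eq. 8): accuracy only, since div(B_hat) = 0 holds by construction."""
    return np.mean((B - B_hat) ** 2)
```

The PINN objective must balance the two terms through w_B and w_div, while the PCNN objective has nothing to balance.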
+
+This is important because when simulating charged
+particle dynamics, great care must be taken to satisfy the
+physics constraints as defined by Eq. 1-5. It is very important
+to enforce well known beam properties such as phase space
+volume-preserving symplectic maps that satisfy Liouville’s
+theorem so that the beam dynamics are self-consistent [51-
+56]. Results on physics informed NNs with hard constraints
+have mostly focused on fluid dynamics and climate modeling
+and are much more limited than PINN approaches [57-61].
+
+PHYSICS-CONSTRAINED NEURAL NETWORKS
+In Figure 1 we summarize three NN approaches: 1) a NN
+approach without physics constraints, 2) a PINN approach
+with soft constraints, and 3) our PCNN approach. We
+demonstrate our PCNN method with numerical studies of
+relativistic (5 MeV), short (σt = 800 fs), high-charge (2 nC) electron
+bunches represented by N = 50 million macro particles. We
+utilize the charged particle dynamics simulation code
+General Particle Tracer (GPT) with 3D space charge forces
+[33,34]. The charged particle distributions were simulated
+for 1.2 ns with all of the data saved to a file at each ∆t = 12
+ps interval so that the beam was displaced 0.36 m over 100
+saved steps.
+Figure 2(A) shows the x and y trajectories of 10000
+random particles sampled from the bunch distribution over
+the entire 100 saved steps as the beam is compressed by a
+0.5 T solenoid magnet whose Bz field is shown in green. Only
+the first 75 steps, shown in black, were used for training and
+the final 25 steps were used for testing, shown in red. Figure
+2 (B) shows the (x,y) and (x,z) projections of the electron
+bunch density at steps 0 and 74.
+The training beam we have created is designed to have mul-
+tiple length scales in order to help the trained PCNN gen-
+eralize to new unseen distributions. We have created sev-
+eral closely spaced Gaussian bunches of varying σx and σy as
seen in the (x,y) projection of step 0. Furthermore, as seen in the (x,z) projection, the beam has an overall bunch length of σz = 800 μm with density fluctuations of various σz along the
+length of the beam. By step 75 the beam has been over-
+compressed in the (x,y) plane as seen by the (x,y) projection
+and the beam density has started to spread in the z direction
+due to space charge forces.
+At each time step we generate discrete versions of J, ρ, E,
and B by breaking up the 2.4 mm × 2.4 mm × 4.4 mm volume, which is co-moving with the center of the beam, into a
+128×128×128 pixel cube with sides of length ∆x = 18.9 μm,
+∆y = 18.9 μm, ∆z = 34.6 μm, and averaging over all of the
+macroparticles in each cube. We compare the three neural
+network approaches to map J to B, as shown in Figure 1: 1)
+A standard NN using (8) as the cost function for training, 2)
A PINN using (7) as the cost function, and 3) A PCNN using
+(8) as the cost function with the physics constraint built into
+the structure of the ML approach. The NN, PINN, and PCNN
+are able to achieve similar errors on the training data as they
+all use a similar 3D convolutional neural network (CNN)
+encoder-decoder architecture, as shown in Figure 3.
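The macroparticle-to-grid deposition described above can be sketched as follows (a minimal NumPy stand-in, not the GPT or paper code; the toy grid size and particle count are illustrative):

```python
import numpy as np

def deposit(positions, weights, bounds, shape):
    """Deposit weighted macroparticles onto a regular 3D grid.

    positions: (N, 3) array of (x, y, z) coordinates.
    weights:   (N,) per-particle charge (or a current component).
    bounds:    [(xmin, xmax), (ymin, ymax), (zmin, zmax)] of the co-moving box.
    shape:     grid resolution, e.g. (128, 128, 128).
    Returns the total deposited weight per cell; dividing by the cell
    volume would give a density such as rho or a component of J.
    """
    hist, _ = np.histogramdd(positions, bins=shape, range=bounds, weights=weights)
    return hist

# Toy usage: 10k particles carrying 2 nC total charge in a small box.
rng = np.random.default_rng(0)
pos = rng.normal(0.0, 0.2e-3, size=(10_000, 3))   # metres
q = np.full(10_000, 2e-9 / 10_000)                # coulombs per macroparticle
rho_grid = deposit(pos, q, [(-1.2e-3, 1.2e-3)] * 3, (16, 16, 16))
```

Particles falling outside the chosen box are silently dropped by `np.histogramdd`, which mirrors the volume-cutoff limitation discussed later in the paper.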
There is, however, an important distinction in terms of neural network size when comparing the NN, PINN, and PCNN approaches. The PCNN is actually smaller while achieving
+
[Fig 1 graphic: schematics of the three approaches. NN: J → NN → B̂ with cost C = ‖B − B̂‖². PINN: same mapping with cost C = wB‖B − B̂‖² + w∆‖∇ · B̂‖². PCNN: J → NN → Â, B̂ = ∇ × Â, with cost C = ‖B − B̂‖².]
+Fig 2: (A) 10000 randomly chosen macroparticle trajectories as they are compressed by a solenoid magnet (green). Initial 75 steps are used for training
+(black) and final 25 for testing (red). (B) Initial and compressed beam charge density (x,y) and (x,z) projections.
+
+Fig 3: (A) A deep convolutional neural network-based encoder-decoder architecture is used with a 128 × 128 × 128 pixel 3D input. (B) The relatively
+small 8 × 8 × 8 latent space at the center of the network ensures that each pixel is a function of every other pixel in the 3D input. (C) The latent space
+volume is then expanded back up to the original size in the generative half of the network.
+
+Fig 4: Comparison of mean absolute errors of field errors and of field
+divergence for the three approaches, normalized by the maximum error
+obtained by the standard NN approach. For the PINN approach there is
+a tradeoff between the accuracy of the prediction and violation of the
+soft constraint.
+better test data results and a much smaller violation of the
+physics constraint. For the NN and PINN all three
components (Jx, Jy, Jz) of J must be used as inputs to generate all three components (B̂x, B̂y, B̂z) of B̂. Therefore, the inputs and outputs of this 3D CNN are objects of size 128×128×128×3 and the input-output mapping of this 3D CNN, NB, is given by
(Jx, Jy, Jz) → NB → (B̂x, B̂y, B̂z). (9)
However, for the PCNN we are generating an estimate of A, which satisfies
A(r, t) = (μ0/4π) ∫ J(r′, t)/|r − r′| d³r′, (10)
and therefore each component of A only depends on the corresponding component of J, and the dependence of each component of A on J has the same functional form.
Therefore, we are able to train just a single one-channel 3D CNN, NA, which takes only one input at a time of size 128×128×128×1, and the input-output mapping is
J∎ → NA → Â∎, ∎ ∈ {x, y, z}, (11)
as shown in Figure 3. A single-channel 3D CNN approach results in a smaller network with fewer weights and also effectively triples the amount of training data seen by a single network, which helps with generalization.
Furthermore, the memory requirement is significantly smaller, allowing for larger batch sizes and bigger overall 3D volumes, which will be especially important for enabling future work utilizing even larger 3D objects of up to 1024³ pixels. Even on the most expensive GPU workstations, which can comfortably sit in one's office, going up to volumes of 1024³ pixels uses up so much GPU memory that the number of 3D convolutional layers that can be utilized in a 3D CNN is greatly diminished, and going beyond this will probably require the use of HPC clusters of many GPUs.
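The component-sharing idea of Eq. 11 amounts to applying one single-channel model to each component of J in turn. A minimal sketch (the stand-in `NA` callable below is illustrative; the real NA is the trained 3D CNN):

```python
import numpy as np

def predict_A(NA, J):
    """Apply one single-channel model NA to each component of J (Eq. 11).

    NA: callable mapping an (nx, ny, nz, 1) volume to an (nx, ny, nz, 1) volume.
    J:  (nx, ny, nz, 3) current density.
    Returns the estimated vector potential with shape (nx, ny, nz, 3).
    The same weights see Jx, Jy, and Jz, tripling the effective training data.
    """
    comps = [NA(J[..., i:i + 1]) for i in range(3)]
    return np.concatenate(comps, axis=-1)

# Stand-in "network": the identity, just to exercise the plumbing.
J = np.random.default_rng(1).normal(size=(8, 8, 8, 3))
A_hat = predict_A(lambda v: v, J)
```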
For ∇ × A we estimate ∂/∂x at pixel (i, j, k) ∈ {1, . . . , 128}³ as
∂A∎^(i,j,k)/∂x = (A∎^(i+1,j,k) − A∎^(i−1,j,k)) / (2Δx) + O(Δx²), (12)
where ∎ ∈ {x, y, z}, and similarly for ∂/∂y and ∂/∂z. The difference computation in Eq. 12 is implemented as a single non-trainable 3D convolutional layer with a custom-designed weight tensor, W∂x, such that the 3D convolution applied
+
[Fig 2 graphic: particle trajectory panels (A) and charge-density projection panels (B); axes in mm.]
[Fig 3 graphic: encoder-decoder architecture compressing 128³ volumes through 64³, 32³, and 16³ down to an 8³ latent space and back, built from conv3D (3,3,3), maxpool3D, up-sampling 3D, leaky ReLU (0.1), dense, conv3DT (3,3,3), and batch normalization layers.]
[Fig 4 graphic: normalized field error ‖B − B̂‖² and divergence ‖∇ · B̂‖² versus step for the NN, PINN, and PCNN approaches.]
+Fig 5: The PINN approach can be made to have smaller divergence by increasing the divergence weight w∆ in Eq. 7, but the tradeoff is decreased field
+prediction accuracy. The PCNN approach has no such tradeoff as the entire cost function is based on the accuracy of the field reconstruction and the
+constraint is built into the structure of the approach. Here we show several views of ∇ · B for the beam at step 90, far beyond the end of the training
+set. Top Row: Slices of ∇·B in the (x,y) plane at z = 0 and in the (x,z) plane at y = 0 are shown for the three methods. Bottom Row: ∇ · B is shown
+projected onto the (x, y) and (x, z) planes by summing over all z and y values, respectively. The sharp rectangular boundaries of high divergence values
+are caused by numerical issues at locations where the beam density suddenly drops to zero. This is due to the fact that a slight non-zero background
+of particles was initially generated within a rectangular volume as the initial conditions for the GPT simulation and those particles have now been
+compressed and rotated by the solenoid magnet.
to a volume gives the partial derivative V → V ∗ W∂x = ∂V/∂x, where each pixel V_ijk in the input volume is replaced by a local 3×3×3 convolution V_ijk → Σ_{i′j′k′} V_{i′j′k′} W_{∂x,i′j′k′}, with W∂x defined as
W∂x = [ 0 0 0 ; 0 −1/(2Δx) 0 ; 0 0 0 ] × [ 0 0 0 ; 0 0 0 ; 0 0 0 ] × [ 0 0 0 ; 0 1/(2Δx) 0 ; 0 0 0 ], (13)
a 3×3×3 tensor (written as three 3×3 slices along x) whose only nonzero entries are ∓1/(2Δx) at the centers of the two slices adjacent in x,
and similarly for W∂y and W∂z. With this approach all of our computations are performed within a 3D CNN utilizing automatically differentiable GPU-enabled TensorFlow libraries [62]. Evaluation of a single forward pass
J → NA → Â → B̂ = ∇ × Â (14)
requires only milliseconds for large 128 × 128 × 128 pixel 3D objects when running our 3D CNN on two NVLinked 80 GB RAM NVIDIA A100 GPUs in a high-performance desktop workstation. By comparison, the SpaceCharge3DMesh method of GPT for these simulations, running on a high-performance desktop with dual Intel Xeon Platinum 8276 CPUs with 28 cores and 56 threads per CPU, required more than one minute per calculation of a Δt = 12 ps interval.
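The non-trainable difference layer of Eqs. 12-13 can be mimicked outside TensorFlow with plain NumPy slicing; the sketch below is an illustrative stand-in for the convolutional implementation, not the paper's code:

```python
import numpy as np

def ddx(A, dx):
    """Second-order central difference along axis 0, the stencil of Eq. 12.
    Interior points only; the one-pixel border is left at zero, mirroring
    the effect of the fixed convolution kernel of Eq. 13."""
    out = np.zeros_like(A)
    out[1:-1] = (A[2:] - A[:-2]) / (2.0 * dx)
    return out

# Central differences are exact for fields linear in x:
x = np.linspace(0.0, 1.0, 32)
A = np.broadcast_to(3.0 * x[:, None, None], (32, 32, 32)).copy()
dAdx = ddx(A, x[1] - x[0])
```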
+
+RESULTS
+The results of training on the first 75 steps and then testing
+on the last 25 are shown in Figure 4. While all of the
+networks learn how to reproduce the magnetic field, the
+PCNN does the best job of respecting the physics constraint
∇ · B = 0. The PINN, depending on the weights wB and w∆ chosen in Eq. 7, can make either one of the costs arbitrarily small,
+but there is an inherent tradeoff between the two. The PCNN
+simultaneously reconstructs the fields with high accuracy
+while always satisfying hard physics constraints. The PCNN
+also performs better on both reconstruction accuracy and
+physics constraints on the unseen test data.
Our approach is to generate a general vector field F and then calculate the curl ∇ × F and train the 3D CNN such that ∇ × F matches the magnetic field B. By doing this we have built in the hard constraint that the only representations of magnetic fields that we can construct, B̂, are curls of a vector field F. For a twice continuously differentiable vector field F this guarantees
∇ · B̂ = ∇ · (∇ × F) = 0. (15)
Once the CNN is trained and B̂ represents a magnetic field, we interpret F as an estimate Â of the vector potential, due to the fact that B = ∇ × A. Note that due to the finite
discretization, which in our case is 128×128×128, and numerical limitations, our vector fields are not perfectly continuous or differentiable and can slightly violate ∇ · B̂ = 0 due to our discretized implementation. This is especially apparent in low-density regions near the edges of the beam, where the current density and vector potential can discontinuously drop to zero from one pixel to the next, introducing numerical errors in the derivatives. This can be seen most clearly in the bottom row of Figure 6. In Figure 5 the (x, y) and (x, z) projections of slices through the center z = 0 and y = 0 of the beam are shown for ∇ · B̂ for the three approaches. We also project ∇ · B̂ onto the (x, y) plane by summing over all z values, and onto the (x, z) plane by summing over all y values.
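A useful property of this discretization, easy to check numerically, is that central-difference operators along different axes commute, so the interior divergence of a discretely computed curl vanishes to machine precision. A minimal NumPy check, assuming the same central-difference stencil as Eq. 12:

```python
import numpy as np

def central(A, axis, d):
    """Central difference of A along the given axis; borders stay zero."""
    out = np.zeros_like(A)
    sl = [slice(None)] * A.ndim
    hi, lo, mid = list(sl), list(sl), list(sl)
    hi[axis], lo[axis], mid[axis] = slice(2, None), slice(None, -2), slice(1, -1)
    out[tuple(mid)] = (A[tuple(hi)] - A[tuple(lo)]) / (2 * d)
    return out

def curl(F, d):
    Fx, Fy, Fz = F
    return (central(Fz, 1, d) - central(Fy, 2, d),
            central(Fx, 2, d) - central(Fz, 0, d),
            central(Fy, 0, d) - central(Fx, 1, d))

def div(F, d):
    return central(F[0], 0, d) + central(F[1], 1, d) + central(F[2], 2, d)

rng = np.random.default_rng(2)
F = tuple(rng.normal(size=(20, 20, 20)) for _ in range(3))
B = curl(F, d=1.0)
residual = div(B, d=1.0)   # ~0 at interior points, by commuting stencils
```

Away from the boundary the residual is pure floating-point rounding; the violations discussed above come from boundary effects and sharp density drops, not from the identity itself.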
After we first generated our particle distributions in GPT and ran the simulations, we had to choose a finite volume to discretize into a 128×128×128 pixel grid to create the 3D density objects for the 3D CNN. We chose dimensions ∆x∆y∆z of approximately 1 mm³ because that captured the vast majority of the beam in the (x, y, z) dimensions throughout its evolution through the solenoid. One limitation of this approach which now becomes evident is the cutoff of the low-density particle regions outside of the chosen volume.
+
[Fig 5 graphic: slices and projections of ∇ · B̂ (NN), ∇ · B̂ (PINN), and ∇ · (∇ × Â) (PCNN); axes in mm, shared color scale of ±0.010.]
+Fig 6: The top row is the initial beam state with the (x,y) projection of Bˆ as generated by the PINN and PCNN shown along with B and the differences
+plotted over the (x,y) projection of the beam’s charge density ρ. For training data the methods perform equally well in field reconstruction accuracy.
+In the middle row we see the first step (76) beyond the training data set and an immediate drop in the accuracy of the PINN. In the third row the (x,z)
+projections of step (76) are shown, the roughness of the (x,z) projection shows intuitively how the PINN matches the B field in a mean squared error
+sense, but violates the constraint ∇ · B = 0.
+
Therefore our initial particle distribution can be thought of as an intense beam surrounded by a cube-shaped halo of diminishing density. This is due to the fact that we defined our initial beam in terms of Gaussian distributions without any hard cutoffs. Once this cube-shaped region begins to travel through the solenoid it is rotated and squeezed, resulting in regions of non-zero and zero density that have sharp straight contours, as can be seen in the bottom part of Figure 5, which cause numerical problems for calculating derivatives. The most obvious mitigation for this would be to create a mask that cuts off all field calculations related to the beam beyond some minimal cutoff density. Despite this limitation, from which each 3D CNN-based approach will suffer, the PCNN approach can be seen to be more accurate than the NN without constraints and also than the PINN approach. In
+Figure 6 we compare PINN and PCNN predictions for two
+states of the beam, one within the training data set for which
+both are highly accurate and one beyond the training set
+where the accuracy quickly drops off. The next step is to add
a prediction Ê of the electric field E. We generate φ from ρ via a second neural network Nφ which gives the mapping
ρ → Nφ → φ̂. (16)
As above, we approximate ∂/∂t as
∂Â/∂t = (Â(t + Δt) − Â(t − Δt)) / (2Δt) + O(Δt²), (17)
where Δt = 1.2×10⁻¹¹ s. After ∂Â/∂t is calculated, a single
+forward pass for E^ is given by
{ρ, J} → [Nφ, NA] → [φ̂, Â] → Ê = −∇φ̂ − ∂Â/∂t, (18)
+as shown in Figure 7.
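The forward pass of Eq. 18 can be sketched with the same central differences, assuming φ̂ and Â at t ± Δt are already in hand (a plain NumPy stand-in with illustrative grid values, not the paper's TensorFlow code):

```python
import numpy as np

def e_field(phi, A_prev, A_next, dx, dt):
    """Assemble E = -grad(phi) - dA/dt (Eq. 18), using the central time
    difference of Eq. 17 for dA/dt and central spatial differences for
    grad(phi).  Gradient is interior-only; borders keep E = -dA/dt."""
    E = -(A_next - A_prev) / (2.0 * dt)
    for ax in range(3):
        g = np.zeros_like(phi)
        sl = [slice(1, -1)] * 3
        hi, lo = list(sl), list(sl)
        hi[ax], lo[ax] = slice(2, None), slice(None, -2)
        g[tuple(sl)] = (phi[tuple(hi)] - phi[tuple(lo)]) / (2.0 * dx)
        E[..., ax] -= g
    return E

# Toy check: phi = 2x and dA/dt = 1 give E = (-3, -1, -1) in the interior.
n, dx, dt = 16, 0.1, 1.2e-11
X = (np.arange(n) * dx)[:, None, None]
phi = np.broadcast_to(2.0 * X, (n, n, n)).copy()
A_prev = np.zeros((n, n, n, 3))
A_next = 2.0 * dt * np.ones((n, n, n, 3))
E = e_field(phi, A_prev, A_next, dx, dt)
```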
Because uncountably many non-unique choices of A and φ generate the same E and B fields, we add the Lorenz gauge as a PINN-type soft constraint to the training cost function
wB‖B − B̂‖² + wE‖E − Ê‖² + wL‖∇ · Â + (1/c²) ∂φ̂/∂t‖², (19)
+which has the additional benefit that it introduces more data
+for the magnetic field calculation as the magnetic field is now
+informed by the Lorenz condition. Predictions for the entire
+3D beam at step 1 by the Lorenz PCNN are shown in Figure
8. In Figure 9 we show the Lorenz PCNN-generated (B̂, Ê) fields at just a single 2D (x, y) slice at various steps, including those beyond the training data.
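The combined cost of Eq. 19 can be sketched as follows, assuming the field arrays and the precomputed gauge terms ∇·Â and ∂φ̂/∂t are given; the weights below are illustrative placeholders, not the values used in training:

```python
import numpy as np

def lorenz_cost(B, B_hat, E, E_hat, div_A, dphi_dt,
                wB=1.0, wE=1.0, wL=0.1, c=2.998e8):
    """Training cost of Eq. 19: field reconstruction errors plus a soft
    penalty on the Lorenz-gauge residual div(A) + (1/c^2) dphi/dt.
    Weights wB, wE, wL are illustrative, not the paper's trained values."""
    field_err = wB * np.mean((B - B_hat) ** 2) + wE * np.mean((E - E_hat) ** 2)
    gauge = div_A + dphi_dt / c ** 2
    return field_err + wL * np.mean(gauge ** 2)

# Perfect fields in an exact gauge give zero cost; any mismatch is penalised.
B = np.ones((4, 4, 4, 3))
zero = np.zeros((4, 4, 4))
perfect = lorenz_cost(B, B, B, B, zero, zero)
imperfect = lorenz_cost(B, 0.9 * B, B, B, zero, zero)
```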
+DISCUSSION
Our final demonstration of the strength of building hard physics constraints into the 3D CNN is its non-catastrophic failure when predicting the electromagnetic fields of two additional 2 nC beams that are very different from both the test and training data shown so
+
[Fig 6 graphic: true, PINN, and PCNN fields and their differences, (By, Bx) panels at steps 1 and 76 and (Bz, Bx) panels at step 76, plotted over y and Δz in mm.]
+Figure 7: The Lorenz PCNN generates the vector and scalar potentials and their associated electromagnetic fields.
far. The first additional beam consists of three parallel electron beams, each of length σz ≈ 3 mm, which is similar to the overall length of the various bunches used in the training data. The three parallel beams differ from the training data in having empty space between the individual bunches. The second additional beam is a hollow tube of
+electrons with the same length as the three parallel beams,
+but whose topology is entirely different from anything the
+PCNN has seen so far. Figure 10 shows results of predicting
+the (E, B) fields for the three parallel and the hollow beams.
+As expected, the PCNN performs worse than previously, but
+is qualitatively very accurate in both E and B field prediction
+for the three parallel beams. The hollow beam is a much
+bigger challenge and much larger field errors are seen, but
+crucially, the predicted fields are still qualitatively correct in
+terms of direction and flow, with most of the error due to
the wrong amplitude being predicted. We believe that this final result shows both the generality and the limitations of the PCNN approach. We should not expect a trained CNN to predict well on an entirely unseen data set; this is the well-known problem of distribution shift in the ML community, in which NNs must be re-trained for inputs that differ from the training data distribution. The fact that the PCNN produces reasonable outputs for inputs wildly different from those of the training data is a major strength of the approach. As Maxwell's equations are important for describing an extremely wide range of physical phenomena, the applications of such a method in electrodynamics are many. Here we will briefly
+touch on charged particle dynamics in high energy
+accelerators. There is a growing literature on utilizing ML-
+based surrogate models as virtual diagnostics in the particle
+accelerator community. For these approaches the NNs are
+typically trained as input-output maps utilizing experimental
+input data together with computationally expensive output
+data such as measuring a charged particle current density at
+one location of an accelerator and then running physics
+codes to map that to another location [41], or mapping input
+accelerator data directly to beam characteristics at other
+accelerator locations [39]. For such applications, the PCNN
+method can enable the development of much more robust
+real-time virtual diagnostics that satisfy physics constraints.
Another large family of applications is accelerator design. For example, given fixed input beam charge and current density distributions, a beam line may be designed with various electromagnet and resonant acceleration components. For each design choice, such as the distance between magnets or the magnetic field strengths, high-fidelity physics-based models must be used to track the charged particle dynamics. With our approach, once a PCNN is
+trained for a family of input beam distributions, we have
+demonstrated that we can make accurate field predictions
+that respect physics constraints even as the beam is
+significantly changed by the application of external fields
+based on the accelerator’s design. The next step of this work,
+which is beyond the scope of this paper and an ongoing
+effort, is to utilize our PCNN approach to quickly push
+particles and to confirm that the field predictions are
+accurate enough such that the particle dynamics are
physically consistent. As we have already seen slight numerical limitations, as discussed above, this might push us to utilize an even higher resolution discretization, such as 512³ or 1024³ pixel volumes, which remains to be determined. If
+this approach is able to provide physically consistent beam
+dynamics, even if they slightly violate constraints, this will be
+a fast and powerful way to zoom in on an optimal design
+estimate, after which more accurate slower physics-based
+simulations can be used for detailed studies.
+
+CONCLUSIONS
A robust PCNN method has been developed to explicitly take Maxwell's equations into account in the structure of generative 3D convolutional neural networks, so that physics constraints are satisfied more accurately.
+Although this method is less general than the incredibly
+flexible PINN approach, in which any partial differential
+equation can be easily introduced as a soft constraint, the
+resulting physics constraints are more accurately respected.
Furthermore, we have shown how to combine this PCNN approach with the PINN approach in our Lorenz PCNN, in which hard physics constraints are enforced in the generation of the E and B fields and a soft penalty on violation of the Lorenz gauge is added to the cost function in Equation 19.
+
[Fig 7 graphic: Lorenz PCNN schematic. J → Â with B̂ = ∇ × Â; ρ → φ̂ with Ê = −∇φ̂ − ∂Â/∂t; training cost C = wB‖B − B̂‖² + wE‖E − Ê‖² + wL‖∇ · Â + (1/c²) ∂φ̂/∂t‖².]
+Fig 8: (A) Electromagnetic fields are shown for all positions within the 1283 pixel volume for normalized charge density ρ >
+0.0025 for the first state of the beam. (B) We zoom in on only the part of the electron bunch which has the largest σz profile
+and show it from two angles. (C) Fields from only a single (x, y) slice of the 3D volume are shown at two different angles.
+
0. Then we can achieve
R(T) ≲ inf_{ǫ>0} { ǫT + √(N(ǫ)T) }. (5)
To achieve this, simply compute an ǫ-covering of H and let the leader play no-regret algorithms on the ǫ-covering set. Note that although the covering is constructed for pairs of actions (a, b) ∈ Aǫ × Bǫ, it suffices for the leader to run no-regret algorithms on the actions Aǫ. The detailed algorithm and proof are given in Appendix A.2.
This upper bound is achieved even when the leader does not utilize the observations of the follower's responses. Indeed, in the worst case (e.g., in Example 3.2), the responses will not provide information.
As a corollary, in the linear regime with HΘ,φ, the covering number is N(ǫ) = N(Θ, ǫ, ‖·‖) ≤ exp(O(d log(1/ǫ))) (Wainwright [2019]). Choosing ǫ ≍ T^(−1/(d+2)), Theorem 3.3 reduces to the following upper bound in the linearly parameterized case.
Corollary 3.4. In the linear case, we can achieve R(T) ≲ T^((d+1)/(d+2)).
In other words, the sample complexity for achieving average regret equal to ǫ is upper bounded by Õ((1/ǫ)^(d+2)). This upper bound is agnostic to any structural property of the feature function φ, such as smoothness or even continuity.
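The exponent in Corollary 3.4 can be sanity-checked numerically: with N(ǫ) = (1/ǫ)^d, minimizing the bound in (5) over a grid of ǫ lands at ǫ on the order of T^(−1/(d+2)) and a value within a constant factor of T^((d+1)/(d+2)). The grid search below is an illustrative sketch, not part of the paper:

```python
import numpy as np

def best_bound(T, d):
    """Minimize  eps*T + sqrt(N(eps)*T)  over a log-grid of eps,
    using the linear-case covering number N(eps) = (1/eps)**d."""
    eps = np.logspace(-6, 0, 2000)
    bound = eps * T + np.sqrt(eps ** (-d) * T)
    i = int(np.argmin(bound))
    return eps[i], bound[i]

d, T = 3, 10 ** 6
eps_star, R = best_bound(T, d)
# Predicted scales: eps_star ~ T**(-1/(d+2)), R ~ T**((d+1)/(d+2)).
```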
+
4 UCB with side observations
Although the worst-case sample complexity for linear Stackelberg games is exponential, it is possible to obtain a fine-grained analysis and an improved rate for the family HΘ,φ when φ is better structured. A natural choice of algorithm for the leader is some variant of UCB that incorporates observations of the follower's actions. In this section, we describe a general recipe for a family of UCB algorithms that incorporate the side information, as well as the challenges in their design.
4.1 Algorithm description
+We consider the following variant of UCB that uses the follower’s responses as side information
+to improve the confidence set.
Algorithm 1 UCB with side information from expert
Input: regression oracles Reg^(b) and Reg^(r) for the response and reward, {α_t}_{t∈[T]}, {β_t}_{t∈[T]}
for t = 1 to T do
    Compute h_t^(b) = Reg^(b)(b̂_1, . . . , b̂_{t−1}) and h_t^(r) = Reg^(r)(r_1, . . . , r_{t−1})
    Set H_t^(b) := {h : Σ_{i=1}^{t−1} ‖b*_h(a_i) − b*_{h_t^(b)}(a_i)‖² ≤ α_t²}
    Set H_t^(r) := {h : Σ_{i=1}^{t−1} (h(a_i) − h_t^(r)(a_i))² ≤ β_t²}
    Construct the confidence set H_t = H_t^(b) ∩ H_t^(r)
    Take action a_t ∈ arg max_{a∈A} sup_{h∈H_t} h(a)
    Observe the (noisy) reward r_t and response b̂_t
end for
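For intuition, a toy instantiation of Algorithm 1 with a finite hypothesis class, where the best response is b*_θ = θ (as in Example 4.4 below) and the confidence sets are implemented as simple elimination rules; the threshold below is an illustrative stand-in for α_t, not a calibrated choice:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, sigma = 3, 200, 0.05

# Finite hypothesis class on the unit sphere; here b*_theta = theta,
# so each noisy response directly localises theta.
H = rng.normal(size=(40, d))
H /= np.linalg.norm(H, axis=1, keepdims=True)
A = H.copy()                          # action set: the hypothesised optima
theta_star = H[7]

alive = np.ones(len(H), dtype=bool)   # confidence set H_t as a mask
regret = 0.0
for t in range(T):
    # Optimism: play the action maximising sup_{h in H_t} h(a).
    vals = A @ H[alive].T
    a = A[np.argmax(vals.max(axis=1))]
    regret += 1.0 - theta_star @ a                      # optimum is 1 on the sphere
    b_hat = theta_star + sigma * rng.normal(size=d)     # noisy response
    # Per-round elimination standing in for the summed condition with
    # radius alpha_t; the reward set H_t^(r) would be intersected likewise.
    alive &= np.linalg.norm(H - b_hat, axis=1) <= 4 * sigma * np.sqrt(d)
```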
Remark 4.1. The regression oracles and the sequences {α_t}_{t∈[T]}, {β_t}_{t∈[T]} must be chosen appropriately so that the following condition holds: given an error tolerance δ ∈ (0, 1), we require h⋆ ∈ ∩_{t=1}^T H_t with probability at least 1 − δ.
Remark 4.2. A common choice for Reg^(b) and Reg^(r) is the least-squares regression oracle that computes
h_t^(b) ∈ arg min_{h∈H} Σ_{i=1}^{t−1} ‖b*_h(a_i) − b̂_i‖² (6)
and
h_t^(r) ∈ arg min_{h∈H} Σ_{i=1}^{t−1} (h(a_i) − r_i)². (7)
When the least-squares computation becomes infeasible under complex response-reward structures (this is common for (6)), custom oracles need to be designed. A more intricate approach may be to jointly construct the estimate using both {b̂_τ}_{τ∈[t−1]} and {r_τ}_{τ∈[t−1]}. We leave it for future research to study systematic designs of the oracles and the confidence sets.
Remark 4.3. When the responses are unobserved or ignored (e.g., by choosing α_t = ∞), Algorithm 1 reduces to the classic eluder UCB using the least-squares (reward) oracle with H_t = H_t^(r), as in Russo and Van Roy [2013].
The choices of {α_t}_{t∈N} and {β_t}_{t∈N} can pose another challenge. A naive attempt to get a generic upper bound on α_t is to use a covering argument as in Russo and Van Roy [2013] with the following measurement between two functions h, h′ ∈ H: d^(b)(h, h′) = sup_a ‖b*_h(a) − b*_{h′}(a)‖. But note that this does not necessarily define a norm, and further, the covering number of H in this sense can be infinite when the best response is discontinuous in the leader's action a. Thus, such an approach is often not useful, and one may have to determine α_t on a per-instance basis.
+
4.2 Examples
While Theorem 3.1 shows that the involvement of the omniscient follower can lead to a "curse of expertise," a stark deterioration in the sample complexity, there are many scenarios where the leader's observation of the follower's responses can expedite learning significantly. In this section, we explore a few such examples.
4.2.1 An imitation-based example
Let us consider a setting where the leader achieves efficient learning through imitation. Heuristically, imitation arises when the optimal action for the leader is equal to the best response of the omniscient follower or a function of it. This may capture, for instance, real-world robotics applications where the actions of the robot and the human expert are exchangeable and the true goal can be easily inferred from the expert's action. A simple scenario is when the robot and the human expert are supposed to carry out the same task perfectly, in which case the robot should simply treat the expert as a role model and imitate. The following is a concrete example.
Example 4.4. Let A = B = Θ = S^(d−1) (or B^d equivalently)². Consider the linearly parameterized function class HΘ,φ with feature function
φ(a, b) = a + b. (8)
Here, the optimal response b*_θ ≡ θ is independent of a, and h_θ(a) = θ · a + 1.
Construction of confidence sets. The (noisy) observations of the follower's best responses simplify the problem into an imitation learning task. A simple oracle for the best-response observations is to take the A-projected empirical average of responses, i.e., θ_t^(b) = Π_A((1/(t−1)) Σ_{i=1}^{t−1} b̂_i).³ The response-based confidence set reduces to
Θ_t^(b) = {θ ∈ Θ : ‖θ − θ_t^(b)‖ ≤ α_t/√(t−1)}.
Standard sub-Gaussian concentration results suggest that the (Euclidean) radius of this confidence set shrinks at a rate of t^(−1/2).
Lemma 4.5. To ensure θ⋆ ∈ ∩_{t∈[T]} Θ_t with probability at least 1 − δ, it suffices to choose
α_t = Θ(σ_b √(d + log(T/δ))).
UCB then chooses actions on S^(d−1) increasingly close to the empirical estimate θ_t^(b).⁴ The regret bound follows from these choices of confidence sets.
Proposition 4.6. In Example 4.4, UCB achieves a regret bound
R_UCB(T) ≲ σ_b² log T · (d + log T). (9)
In other words, the average regret decays at a rate of Õ(σ_b² d/T). This has also been analyzed in the setting of imitation learning [Rajaraman et al., 2021], and the results are consistent.
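The t^(−1/2) shrinkage of the response-based estimate can be checked numerically; the dimensions and noise level below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 8, 0.5
theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)          # theta* on the unit sphere

def estimate(t):
    """Sphere-projected empirical average of t noisy responses b_i = theta + noise."""
    b = theta + sigma * rng.normal(size=(t, d))
    m = b.mean(axis=0)
    return m / np.linalg.norm(m)        # projection onto S^{d-1}

err_small_t = np.linalg.norm(estimate(50) - theta)
err_large_t = np.linalg.norm(estimate(5000) - theta)
```

With a hundred-fold increase in t the error drops by roughly a factor of ten, matching the t^(−1/2) rate.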
²While it is customary to consider Θ = B^d, we will observe below that the imitation-based algorithm does not crucially rely on ‖θ⋆‖ and only incurs smaller regret if ‖θ⋆‖ < 1. This is because the algorithm asymptotically relies solely on the response observations, which are invariant under scaling of θ⋆. It is also without loss of generality to restrict all actions to the sphere.
³Define the projection of y ∈ R^d onto a closed set X ⊆ R^d as Π_X(y) := arg min_{x∈X} ‖y − x‖, breaking ties arbitrarily when the minimizer is not unique.
⁴Even simpler, the leader can play the A-projected empirical average of responses. Under our choice of constant α, the analysis will be the same, with the results differing by at most a constant factor.
+
Remark 4.7. When the follower's responses are unobserved, this is simply a linear bandit, where the minimax regret is Ω(σ_b d√T) ≫ O(σ_b² d log² T). This indicates the value of the response observations. When the follower's response is noiseless, one can see that a single sample suffices to find the optimal response, since one always observes b*_θ = θ.
Remark 4.8. Note the gap between the Θ(log T) regret when the response observations are used and the Θ(√T) regret when they are ignored or unavailable, showing the value of those response observations. In fact, it is easy to modify this example slightly (e.g., taking φ(a, b) = max{|θ⊤a|, ∆} b for some ∆ ∈ (0, 1)) to create an even larger gap: when the leader uses the response observations, the regret is Õ(d log T) with sample complexity Õ(d log(1/ǫ)); when the response observations are unavailable, the sample complexity increases to Ω(ǫ^(−d)).
4.2.2 Expert-guided exploration
In many scenarios, the omniscient follower's actions may not directly reveal the exact state of the world but still provide crucial information. The next example illustrates a simple setting where the follower's response can significantly reduce the sample complexity.
Example 4.9. Let A = B = S^(d−1) and
Θ = {(θa, θb) ∈ S^(d−1) × S^(d−1) : θa · θb ≥ ζ}
for some ζ ∈ (0, 1). Consider the parameterized family of functions HΘ = {h_θ : θ ∈ Θ} where
h_θ(a, b) = ReLU(θa · a − ∆) + θb · b
for some ∆ ∈ (0, 1). For simplicity, we will assume that the response observations are noiseless (i.e., σ_b = 0), although the noisy case can be analyzed analogously.
Confidence sets. The best response is b*_θ ≡ θb, again independent of the leader's action. Upon observing b1 = θb, the leader should construct confidence sets Θ_t^(b) = {θa ∈ S^(d−1) : θa · b1 ≥ ζ} × {b1}, while Θ_t^(r) is chosen as in linear UCB. As a result, all subsequent actions the leader takes must fall into
A1 := {a ∈ A : a · b1 ≥ ζ}. (10)
This refinement of the action set will reduce the sample complexity, and depending on the size of ζ relative to ∆, the reduction can be significant.
Strong reduction. When 1 − ζ ≤ (1 − ∆)/4, the leader learns that θa · b1 ≥ ζ. In particular, any action a ∈ A1 must satisfy
θa · a = (2 − ‖θa − a‖²)/2 ≥ (2 − (‖θa − b1‖ + ‖a − b1‖)²)/2 ≥ (2 − (2√(2 − 2ζ))²)/2 = 1 − 4(1 − ζ) ≥ ∆, (11)
and thus h(a) = θa · a − ∆ + 1 behaves as a linear function within A1. By playing UCB within A1, the leader reduces the problem to a linear bandit instance and thus achieves the following regret bound.
Proposition 4.10. Assume 1 − ζ ≤ (1 − ∆)/4 in Example 4.9. UCB achieves
R_UCB(T) ≤ Õ(d√T). (12)
This leads to a sample complexity of Õ(d²/ǫ²), in contrast to the exponential sample complexity exp(O(d log(1/ǫ))) if the responses were unobserved. Information from the follower's response guides the leader's exploration to the well-conditioned part of the action space. Given the Ω(d√T) regret lower bound for linear bandits, the upper bound (12) is tight (up to logarithmic terms).
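Inequality (11) can be spot-checked numerically: sampling pairs of unit vectors in the cap {u : u · b1 ≥ ζ} at the boundary case 1 − ζ = (1 − ∆)/4 never produces θa · a below ∆. The cap sampler below is a hypothetical helper, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, Delta = 5, 0.5
zeta = 1.0 - (1.0 - Delta) / 4.0      # boundary case: 1 - zeta = (1 - Delta)/4

def unit(v):
    return v / np.linalg.norm(v)

b1 = unit(rng.normal(size=d))

def sample_cap(c, zeta):
    """Draw a unit vector u with u . c >= zeta (hypothetical helper:
    a proposal biased toward c, u = unit(g + 2c), plus rejection)."""
    while True:
        u = unit(unit(rng.normal(size=d)) + 2.0 * c)
        if u @ c >= zeta:
            return u

# Worst inner product over many (theta_a, a) pairs drawn from the cap.
worst = min(sample_cap(b1, zeta) @ sample_cap(b1, zeta) for _ in range(2000))
```

The empirical worst case stays above ∆, consistent with (and in fact slightly better than) the bound 1 − 4(1 − ζ) used in the proof.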
+
Weak reduction. When ζ is small relative to ∆, the problem does not immediately reduce to a linear bandit, but we have the following improved upper bound.
Proposition 4.11. There exists an algorithm Alg that achieves
R_Alg(T) ≤ O((C_ζ^d T^(d+1))^(1/(d+2))), (13)
where C_ζ := √(1 − ζ²) ∈ (0, 1).
This bound improves as ζ decreases. The sample complexity is therefore Õ(C_ζ^d ǫ^(−d−2)), a C_ζ^d reduction compared with the original complexity without observing the responses in Corollary 3.4.
Since the reduced problem is still a ReLU bandit, UCB will not be suitable. Instead, (13) can be achieved through a discretization of A1, as in the upper bound of Theorem 3.3.
5 Beyond UCB
Although the UCB algorithm gives a near-optimal rate in most of the above examples, we also provide two cases where UCB fails to achieve the optimal rate. This necessitates tailored algorithm design in specific settings.
5.1 Nonlinear (polynomial) family
UCB is known to fail to achieve the optimal rate in the case of the polynomial bandit family of Huang et al. [2021], where the reward is a polynomial activation on top of a linear family. We construct an example which utilizes the structure of the polynomial bandit, formally defined below.
Example 5.1 (Polynomial bandit). Consider the convex function f(x) = x^{2k} for some k ∈ Z₊. Let

A = B^{d−1}, B = [−1, 1], Θ = B^{d−1} × {1},   (14)

and

φ(a, b) = (2kba, −f*(2kb)),   (15)

where f* is the convex conjugate of f. Consider the nonlinearly parameterized family

H_Θ := {h_θ(a, b) = f(θ · φ(a, b)) | θ ∈ Θ}.   (16)

By properties of the convex conjugate,

h_θ(a) = f(θ_{−d} · a) = (θ_{−d} · a)^{2k}   (17)

with the best response

b*_θ(a) = argmax_{−1≤b≤1} {2kb θ_{−d} · a − f*(2kb)} = f′(θ_{−d} · a)/(2k) = (θ_{−d} · a)^{2k−1} ∈ [−1, 1].

This observation allows us to apply results on polynomial bandits from Huang et al. [2021].
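The best-response computation above can be verified for a concrete k by brute force; the closed form f*(2kb) = (2k − 1)|b|^{2k/(2k−1)} used below follows from f(x) = x^{2k} by a short calculation (k = 2 and the grid are illustrative choices):

```python
import numpy as np

k = 2                                            # f(x) = x^{2k} = x^4
f = lambda x: x ** (2 * k)
# convex conjugate of f evaluated at y = 2k*b: f*(2k b) = (2k-1) |b|^{2k/(2k-1)}
f_star_2kb = lambda b: (2 * k - 1) * np.abs(b) ** (2 * k / (2 * k - 1))

bs = np.linspace(-1.0, 1.0, 200001)              # grid over the follower's actions
for x in (-0.9, -0.3, 0.0, 0.4, 0.8):            # values of theta_{-d} . a
    obj = 2 * k * bs * x - f_star_2kb(bs)        # follower's objective
    b_best = bs[np.argmax(obj)]
    # theory: b*(x) = x^{2k-1}, and the attained maximum recovers f(x)
    assert abs(b_best - np.sign(x) * np.abs(x) ** (2 * k - 1)) < 1e-3
    assert abs(obj.max() - f(x)) < 1e-6
```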
Response-regret structure. Observe the following properties of the best response function in Example 5.1.

1. The expected reward is a function of the best response, independent of the true parameter. Namely,

h_θ(a) = b*_θ(a)^{2k/(2k−1)}.   (18)

This mapping is Lipschitz:

|h_θ(a) − h_θ(a′)| ≤ (2k/(2k − 1)) |b*_θ(a) − b*_θ(a′)|,   (19)

and further

argmax_{a∈A} b*_θ(a) = θ_{−d} ∈ argmax_{a∈A} h_θ(a),   (20)

with both maxima being 1.

2. The response observation, as a degree 2k − 1 polynomial, is more informative than the reward observation, a degree 2k polynomial, when the noise levels are the same and θ_{−d} · a is small.

Based on these two observations, the leader may view the response b_t as a proxy reward and aim to minimize the proxy regret

R̃(T) := Σ_{t=1}^T (1 − b*_θ(a_t)).   (21)

This is consistent with minimizing the true regret R(T), which differs from the proxy regret R̃(T) by at most a constant factor, by (19).
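Properties (18)–(20) can be sanity-checked on a one-dimensional grid of values x = θ_{−d} · a (a standalone sketch; k = 3 is an arbitrary choice):

```python
import numpy as np

k = 3                                  # reward degree 2k = 6, response degree 2k - 1 = 5
xs = np.linspace(-1.0, 1.0, 1001)      # possible values of theta_{-d} . a
h = xs ** (2 * k)                      # expected reward  h_theta(a)  = x^{2k}
b_star = xs ** (2 * k - 1)             # best response    b*_theta(a) = x^{2k-1}

# identity (18): the reward is a parameter-free function of the response
assert np.allclose(h, np.abs(b_star) ** (2 * k / (2 * k - 1)))

# Lipschitz bound (19): |h(a) - h(a')| <= 2k/(2k-1) * |b*(a) - b*(a')|
L = 2 * k / (2 * k - 1)
diff_h = np.abs(h[:, None] - h[None, :])
diff_b = np.abs(b_star[:, None] - b_star[None, :])
assert np.all(diff_h <= L * diff_b + 1e-12)

# (20): the response is maximized at x = 1, where the reward also attains its maximum 1
assert xs[np.argmax(b_star)] == 1.0 and np.isclose(h.max(), 1.0)
```

This is why the response works as a proxy reward: small proxy regret forces small true regret, up to the factor 2k/(2k − 1).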
Regret bound. Using the response observations exclusively to minimize the proxy regret R̃(T) = Σ_{t=1}^T (1 − b*_θ(a_t)), the leader reduces her task to a polynomial bandit problem with a degree 2k − 1 polynomial activation function. By (19), we may focus on bounding the proxy regret. Corollary 3.16 of Huang et al. [2021] gives

R̃(T) ≤ Õ(√(d^{2k−1} T)),   (22)

or equivalently a sample complexity of Õ(d^{2k−1}/ǫ²) for achieving ǫ average proxy regret. The following bound on the true regret follows from (19) and (22).

Proposition 5.2. In Example 5.1, there exists an algorithm Alg, using the response observations exclusively, that achieves

R_Alg(T) ≤ Õ(√(d^{2k−1} T)).   (23)

Proposition 5.2 gives an Õ(d^{2k−1}/ǫ²) sample complexity. For instance, the leader can achieve this regret with the zeroth-order algorithm proposed in Huang et al. [2021, Algorithm 6].

Remark 5.3 (Lower bound). Since the response observations have a higher signal-to-noise ratio than the reward observations, we should expect the sample complexity of Example 5.1 to be of the same order as the sample complexity of achieving ǫ average regret in a degree 2k − 1 polynomial bandit. Huang et al. [2021] show that this is lower bounded by Ω(d^{2k−1}/ǫ²). Thus, (23) is essentially optimal.

Remark 5.4 (Benefit of observing responses). If the leader does not observe the responses, the problem is equivalent to a degree 2k polynomial bandit. The optimal regret without observing the expert's actions leads to an Õ(d^{2k}/ǫ²) sample complexity. Thus, the response observations shave off a factor of d, which can be significant when the dimensionality is high.

Remark 5.5 (Suboptimality of UCB). Using the traditional Eluder UCB algorithm leads to a suboptimal sample complexity of Õ(d^{2k}/ǫ²) when the leader solely uses the response observations. Still, this is a factor-d improvement compared to what she can achieve with UCB without the response observations.
5.2 Failure of the optimism principle

The next example is adapted from the ReLU bandit in Example 3.2, and shows that optimism-based methods can suffer dramatic suboptimality in certain problems.

Example 5.6. Let A = B^{d−1}, B = B^{d−1} × [0, 1], and

Θ = {(θ_{−d}, θ_d) | θ_{−d} ∈ B^{d−1}, θ_d = 1 − ∆}   (24)

for some ∆ ∈ (0, 1). Consider the linear family H_{Θ,φ} with

φ(a, b) = ∥a∥ ((1 − b_d)a, b_d − ∥b_{−d}∥) + ((1 − ∥a∥)/2) (b_{−d}, 0).   (25)

For any θ ∈ Θ with θ_{−d} ∈ S^{d−1}, the optimal action for the leader is θ_{−d}, with the follower best responding with (0, 0) and achieving unit expected reward.

When ∥a∥ = 1, this function behaves exactly as in Example 3.2, where b*_θ(a) = (0, 1) whenever θ_{−d} · a < 1 − ∆; when a = 0, the best response is b*_θ(0) = (θ_{−d}, b_d). Thus, if the response observations are noiseless, the leader learns the true parameter, and hence the optimal action, in one round by playing a_1 = 0.

However, any optimism-based method such as UCB will not achieve such efficient learning, even when the responses are noiselessly observed. It is straightforward to verify that, for any action a with ∥a∥ < 1, the optimistic reward satisfies

sup_{θ∈Θ} h_θ(a) < 1.   (26)

Thus, as long as the confidence set contains some θ with θ_{−d} ∈ S^{d−1}, which holds under our initial condition, optimism causes the leader to take only actions a ∈ S^{d−1}, reducing the problem to the worst-case Example 3.2.
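A numerical illustration of (26) (a standalone sketch, not the paper's verification; since θ · φ(a, b) is linear in b_d and in r = ∥b_{−d}∥ with b_{−d} best aligned with θ_{−d}, the inner maximum over b sits at a corner, which the helper below enumerates; d = 4 and ∆ = 0.3 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
d, Delta = 4, 0.3

def h(theta_rest, a):
    """h_theta(a) = max_b theta . phi(a, b) for Example 5.6.

    The objective is linear in b_d in [0, 1] and in r = ||b_{-d}|| in [0, 1]
    (with b_{-d} aligned with theta_rest), so the maximum is at a corner."""
    s = np.linalg.norm(a)
    vals = [s * ((1 - bd) * (theta_rest @ a) + (1 - Delta) * (bd - r))
            + (1 - s) / 2 * r * np.linalg.norm(theta_rest)
            for bd in (0.0, 1.0) for r in (0.0, 1.0)]
    return max(vals)

for _ in range(100):
    theta = rng.normal(size=d)
    theta /= np.linalg.norm(theta)                 # theta_{-d} on the unit sphere
    a_interior = rng.uniform(0.1, 0.99) * theta    # action with ||a|| < 1
    assert h(theta, a_interior) < 1.0              # optimistic reward below 1, as in (26)
    assert abs(h(theta, theta) - 1.0) < 1e-9       # boundary action a = theta_{-d} attains 1
```

Interior actions never look optimistically attractive, so an optimistic leader keeps playing boundary actions and never benefits from the revealing action a = 0.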
6 Conclusions

We have studied a model of online learning in decentralized cooperative Stackelberg games. We showed that, even with an omniscient follower who always best responds (myopically), the worst-case sample complexity for a linear family can be as large as exp(Θ(d log(1/ǫ))). This "curse of expertise" highlights the challenge caused by miscoordinated exploration. It also raises the question of how a non-myopic expert follower should respond to the leader's actions (without knowing the leader's exact algorithm) to expedite their learning and maximize their long-term reward.

We considered a UCB-type algorithm that incorporates response observations. We analyzed examples of varying hardness, ranging from efficient learning through imitation and guided exploration to the worst-case linear family example with exponential sample complexity.

Beyond the examples considered in this paper, there are numerous scenarios where the roles of the leader and the follower are more complex to reason about. This poses unique challenges for both the learning process of the leader and the subsequent analysis of regret, indicating a fertile ground for future research. Specifically, our current template of Algorithm 1 requires designing the confidence sets based on the specific response-reward structure of each problem. It remains open to find a general design (or prove the lack thereof) that systematically synthesizes the response and reward observations. A general framework of analysis that provides a unified yet sharp upper bound across the examples would also be valuable.
References

Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems, 24, 2011.

Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.

Yu Bai, Chi Jin, Huan Wang, and Caiming Xiong. Sample-efficient learning of Stackelberg equilibria in general-sum games. Advances in Neural Information Processing Systems, 34:25799–25811, 2021.

Vincent Conitzer and Tuomas Sandholm. Computing the optimal strategy to commit to. In Proceedings of the 7th ACM Conference on Electronic Commerce, pages 82–90, 2006.

Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, and Zhiwei Steven Wu. Strategic classification from revealed preferences. In Proceedings of the 2018 ACM Conference on Economics and Computation, pages 55–70, 2018.

Kefan Dong, Jiaqi Yang, and Tengyu Ma. Provable model-based nonlinear bandit and reinforcement learning: Shelve optimism, embrace virtual curvature. Advances in Neural Information Processing Systems, 34:26168–26182, 2021.

Jacques Ferber and Gerhard Weiss. Multi-agent systems: An introduction to distributed artificial intelligence, volume 1. Addison-Wesley, Reading, 1999.

Jerzy Filar and Koos Vrieze. Competitive Markov decision processes. Springer Science & Business Media, 2012.

Dylan J Foster, Sham M Kakade, Jian Qian, and Alexander Rakhlin. The statistical complexity of interactive decision making. arXiv preprint arXiv:2112.13487, 2021.

Matthias Gerstgrasser and David C Parkes. Oracles & followers: Stackelberg equilibria in deep multi-agent reinforcement learning. arXiv preprint arXiv:2210.11942, 2022.

Michael A Goodrich, Alan C Schultz, et al. Human–robot interaction: A survey. Foundations and Trends® in Human–Computer Interaction, 1(3):203–275, 2008.

Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, and Mary Wootters. Strategic classification. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, pages 111–122, 2016.

Chien-Ju Ho, Aleksandrs Slivkins, and Jennifer Wortman Vaughan. Adaptive contract design for crowdsourcing markets: Bandit algorithms for repeated principal-agent problems. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, pages 359–376, 2014.

Baihe Huang, Kaixuan Huang, Sham Kakade, Jason D Lee, Qi Lei, Runzhe Wang, and Jiaqi Yang. Optimal gradient-based algorithms for non-concave bandit optimization. Advances in Neural Information Processing Systems, 34:29101–29115, 2021.

Hsu Kao, Chen-Yu Wei, and Vijay Subramanian. Decentralized cooperative reinforcement learning with hierarchical information structure. In International Conference on Algorithmic Learning Theory, pages 573–605. PMLR, 2022.

Robert Kleinberg and Tom Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In 44th Annual IEEE Symposium on Foundations of Computer Science, pages 594–605. IEEE, 2003.

Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238–1274, 2013.

John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. Advances in Neural Information Processing Systems, 20, 2007.

Niklas Lauffer, Mahsa Ghasemi, Abolfazl Hashemi, Yagiz Savas, and Ufuk Topcu. No-regret learning in dynamic Stackelberg games. arXiv preprint arXiv:2202.04786, 2022.

Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

Yang Liu and Yiling Chen. A bandit framework for strategic regression. Advances in Neural Information Processing Systems, 29, 2016.

Janusz Marecki, Gerry Tesauro, and Richard Segal. Playing repeated Stackelberg games with unknown opponents. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 2, pages 821–828, 2012.

Nived Rajaraman, Yanjun Han, Lin Yang, Jingbo Liu, Jiantao Jiao, and Kannan Ramchandran. On the value of interaction and function approximation in imitation learning. Advances in Neural Information Processing Systems, 34:1325–1336, 2021.

Daniel Russo and Benjamin Van Roy. Eluder dimension and the sample complexity of optimistic exploration. Advances in Neural Information Processing Systems, 26, 2013.

Ahmad EL Sallab, Mohammed Abdou, Etienne Perot, and Senthil Yogamani. Deep reinforcement learning framework for autonomous driving. Electronic Imaging, 2017(19):70–76, 2017.

Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295, 2016.

Milind Tambe. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge University Press, 2011.

Heinrich von Stackelberg. Market Structure and Equilibrium. Springer Science & Business Media, 2010.

Martin J Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press, 2019.

Chih-Chun Wang, Sanjeev R Kulkarni, and H Vincent Poor. Bandit problems with side observations. IEEE Transactions on Automatic Control, 50(3):338–355, 2005.

Michael Wooldridge. An introduction to multiagent systems. John Wiley & Sons, 2009.

Annie Xie, Dylan Losey, Ryan Tolsma, Chelsea Finn, and Dorsa Sadigh. Learning latent representations to influence multi-agent interaction. In Conference on Robot Learning, pages 575–588. PMLR, 2021.

Boling Yang, Liyuan Zheng, Lillian J Ratliff, Byron Boots, and Joshua R Smith. Stackelberg MADDPG: Learning emergent behaviors via information asymmetry in competitive games. 2022.

Yaolong Yu, Haifeng Xu, and Haipeng Chen. Learning correlated Stackelberg equilibrium in general-sum multi-leader-single-follower games. arXiv preprint arXiv:2210.12470, 2022.

Kaiqing Zhang, Zhuoran Yang, and Tamer Başar. Multi-agent reinforcement learning: A selective overview of theories and algorithms. Handbook of Reinforcement Learning and Control, pages 321–384, 2021.

Han Zhong, Zhuoran Yang, Zhaoran Wang, and Michael I Jordan. Can reinforcement learning find Stackelberg-Nash equilibria in general-sum Markov games with myopic followers? arXiv preprint arXiv:2112.13521, 2021.

Banghua Zhu, Stephen Bates, Zhuoran Yang, Yixin Wang, Jiantao Jiao, and Michael I Jordan. The sample complexity of online contract design. arXiv preprint arXiv:2211.05732, 2022.
A Proofs in Section 3

A.1 Proof of Theorem 3.1

Proof. Consider Example 3.2. The expected reward is given by

h_θ(a, b) := θ · φ(a, b) = (1 − b)θ_{−d} · a + b(1 − ∆).   (27)

Optimizing over b ∈ [0, 1] yields

h_θ(a) = max{1 − ∆, θ_{−d} · a}.   (28)

Note that for any a ∈ A such that θ_{−d} · a < 1 − ∆, the best response of the follower is b = 1, yielding an expected reward of 1 − ∆; for any a ∈ A such that θ_{−d} · a ≥ 1 − ∆, the best response of the follower is b = 0, yielding an expected reward of θ_{−d} · a. The optimal joint response a = θ_{−d} and b = 0 achieves the optimal expected reward of ∥θ_{−d}∥ = 1 > 1 − ∆. From the leader's perspective, this now reduces to the problem of a ReLU bandit considered in Dong et al. [2021], since the response provides no information until the average regret falls below ∆. Thus we have

inf_π̂ sup_{θ∈Θ} R(T) ≥ Ω(T^{1−1/(d−2)}).
A.2 Proof of Theorem 3.3

Proof. Let H(ǫ) be a minimal ǫ-covering of H under the metric ∥·∥∞. Let

A(ǫ) = {argmax_{a∈A} max_{b∈B} h(a, b) | h ∈ H(ǫ)},

where we break ties arbitrarily when the optimal action is non-unique. Note that we have |A(ǫ)| ≤ |H(ǫ)| ≤ N(ǫ). Let h⋆ be the true reward function. By the definition of a covering, there exists some h_ǫ ∈ H(ǫ) such that ∥h⋆ − h_ǫ∥∞ ≤ ǫ. Thus we have

R(T) = Σ_{t=1}^T E[h⋆(a*) − h⋆(a_t)] ≤ ǫT + Σ_{t=1}^T E[h_ǫ(a*) − h_ǫ(a_t)].

We know that the optimal action for h_ǫ must be inside the set A(ǫ). Thus any worst-case optimal no-regret algorithm on the set A(ǫ) gives a regret of √(|A(ǫ)|T) ≤ √(N(ǫ)T). This gives

R(T) ≤ ǫT + √(N(ǫ)T).

Taking the infimum over ǫ finishes the proof.
B Proofs in Section 4

B.1 Proof of Lemma 4.5

Proof. Recall the notation from Example 4.4: let θ_t^{(b)} = Π_A(θ̂_t) for t ≥ 2, with θ̂_t := (1/(t − 1)) Σ_{i=1}^{t−1} b̂_i. The first round incurs at most a constant regret and can be ignored. It suffices to show that, with probability at least 1 − δ,

∥θ − θ_t^{(b)}∥ ≤ α_t/√t   (29)

for α_t = Θ(σ_b √(d + log(T/δ))).

First, we bound the distance between θ̂_t and θ. By our assumption,

∥θ̂_t − θ∥ = ∥(1/(t − 1)) Σ_{i=1}^{t−1} w_i∥,

where w_1, . . . , w_{t−1} are i.i.d. zero-mean σ_b-sub-Gaussian. We proceed using a covering argument.
Construct U ⊆ S^{d−1} such that

inf_{v∈S^{d−1}} sup_{u∈U} u · v ≥ 1/2.   (30)

Note that ∥u − v∥ = √(2 − 2u · v) for u, v ∈ S^{d−1}. Hence, equivalently, we may choose U as a minimal 1-covering of S^{d−1} in the Euclidean metric. Then

log |U| ≤ log N^int(S^{d−1}, 1, ∥·∥) ≤ log M(B^d, 1, ∥·∥) = Θ(d),   (31)

where N^int and M denote the internal covering number and the packing number of the space under a given metric. The choice of U ensures that

∥w∥ ≤ 2 sup_{u∈U} u · w   (32)

for all w ∈ R^d, and ignoring the constant factor, we may focus on upper bounding sup_{u∈U} Σ_{i=1}^{t−1} u · w_i.
For each choice of u ∈ U, let Z_{u,i} = u · w_i, so that Z_{u,1}, . . . , Z_{u,t−1} are i.i.d. zero-mean σ_b-sub-Gaussian by the definition of sub-Gaussian random vectors. By Hoeffding's inequality for sub-Gaussian random variables, we have

P(Σ_{i=1}^t Z_{u,i} > x) ≤ exp(−x²/(2tσ_b²))   (33)

for all x > 0. Applying a union bound over U and using (32) gives

P(∥Σ_{i=1}^t w_i∥ ≥ 2x) ≤ P(sup_{u∈U} Σ_{i=1}^t Z_{u,i} ≥ x) ≤ |U| exp(−x²/(2tσ_b²)).   (34)

Choosing x = σ_b √(2t log(|U|T/δ)) ≲ σ_b √(t(d + log(T/δ))) ensures that, by another union bound over t ∈ [T],

∥θ̂_t − θ∥ ≲ σ_b √(t^{−1}(d + log(T/δ)))   (35)

with probability at least 1 − δ. By the triangle inequality and the definition of the projection,

∥θ_t^{(b)} − θ∥ ≤ ∥θ_t^{(b)} − θ̂_t∥ + ∥θ̂_t − θ∥ ≤ 2∥θ̂_t − θ∥ ≲ σ_b √(t^{−1}(d + log(T/δ)))   (36)

with the same probability. This gives (29) and completes the proof.
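The estimator in the proof can be simulated directly (a standalone sketch with Gaussian noise standing in for the sub-Gaussian response noise; the dimension, noise level, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
d, sigma_b = 8, 0.5
theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)

# noisy responses b_hat_i = theta + w_i with i.i.d. sub-Gaussian (here Gaussian) noise
B_hat = theta + rng.normal(scale=sigma_b, size=(1600, d))

errs = []
for t in (100, 400, 1600):
    theta_hat = B_hat[:t].mean(axis=0)
    # projection onto the unit ball; by the triangle-inequality step in (36),
    # it is at most twice as far from theta as theta_hat is
    proj = theta_hat / max(1.0, np.linalg.norm(theta_hat))
    assert np.linalg.norm(proj - theta) <= 2 * np.linalg.norm(theta_hat - theta) + 1e-12
    errs.append(np.linalg.norm(theta_hat - theta))

# the estimation error shrinks roughly like 1/sqrt(t), matching (35)
assert errs[2] < errs[0]
```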
B.2 Proof of Proposition 4.6

Proof. We condition on the validity of the confidence sets, which holds with probability at least 1 − δ per our choice of {α_t}_{t∈[T]}.

UCB always chooses a_t in the confidence set Θ_t, whose radius is of order O(σ_b √(t^{−1}(d + log(T/δ)))). When θ⋆ ∈ Θ_t, we have ∥a_t − θ⋆∥ ≲ σ_b √(t^{−1}(d + log(T/δ))). Since both a_t and θ⋆ are unit vectors, we have

R_UCB(T) ≤ 2δT + Σ_{t=1}^T (1 − θ⋆ · a_t) ≤ 2δT + 2 + (1/2) Σ_{t=2}^T ∥θ⋆ − a_t∥² ≲ 2δT + Σ_{t=2}^T (σ_b²/t)(d + log(T/δ)) = O(δT + σ_b² log T · (d + log(T/δ))),

where the term 2δT bounds the contribution of the event that the confidence sets fail to all be valid. Choosing δ = 1/T gives our desired bound.
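The O(log T) factor in the last display is just the harmonic sum Σ_{t=2}^T 1/t; a quick illustrative check:

```python
import math

T = 10 ** 5
harmonic_tail = sum(1.0 / t for t in range(2, T + 1))
# sum_{t=2}^{T} 1/t = log T + gamma - 1 + o(1), i.e. Theta(log T)
assert math.log(T) - 1 < harmonic_tail < math.log(T)
```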
B.3 Proof of Proposition 4.10

Proof. After the first round, the leader's task reduces to a linear bandit with action space A_1: only actions within A_1 will be played, and the reward is linear in this region. As is well known for linear bandits (e.g., Russo and Van Roy [2013]), with probability 1 − δ, the regret in this linear stage (i.e., excluding the first round) is upper bounded by

2δT + O(√(d log T · (d log T + log δ^{−1}) · T)).

The first round adds at most a constant to this and can be ignored. By choosing δ = T^{−1}, we have

R_UCB(T) ≤ Õ(d√T).   (37)
B.4 Proof of Proposition 4.11

Proof. Let Θ_1 = {θ_a ∈ S^{d−1} | θ_a · b_1 ≥ ζ} × {b_1}, and denote the true parameter by θ⋆ = (θ⋆_a, θ⋆_b). By our assumption on the problem structure, we have θ⋆ ∈ Θ_1.

As in the proof of Theorem 3.3, let Θ(ǫ) be a minimal ǫ-covering of Θ_1 in the Euclidean metric, with ǫ > 0 to be specified later. In particular, there is some θ̃_a ∈ Θ(ǫ) with ∥θ̃_a − θ⋆_a∥ ≤ ǫ. Let A(ǫ) = {argmax_{a∈A} ReLU(θ_a · a − ∆) | θ_a ∈ Θ(ǫ)}, where we break ties arbitrarily when the optimal action is non-unique. Note that |A(ǫ)| ≤ |Θ(ǫ)| = N(Θ_1, ǫ, ∥·∥).
Now, let the leader play UCB on the discrete action set A(ǫ) after the first round. The regret satisfies

R(T) ≤ 1 + Σ_{t=2}^T E[h⋆(a*) − h⋆(a_t)] ≤ 1 + T · E[h⋆(a*) − h⋆(ã*)] + Σ_{t=1}^T E[h⋆(ã*) − h⋆(a_t)],   (38)

where a* = θ⋆_a and ã* ∈ argmax_{a∈A(ǫ)} h⋆(a). Since h⋆(ã*) ≥ h⋆(θ̃_a) ≥ h⋆(a*) − ǫ by our choice of θ̃_a and A(ǫ), the second term in (38) is at most ǫT. The third term, the regret of UCB on A(ǫ), is bounded by O(√(N(Θ_1, ǫ, ∥·∥) · T)) in expectation.
It remains to bound N(Θ_1, ǫ, ∥·∥). Note that for any θ_a, θ′_a ∈ Θ_1, we have

θ_a · θ′_a = (θ_a · b_1)(θ′_a · b_1) + (θ_a − (θ_a · b_1)b_1) · (θ′_a − (θ′_a · b_1)b_1) ≥ ζ² − ∥θ_a − (θ_a · b_1)b_1∥ ∥θ′_a − (θ′_a · b_1)b_1∥ ≥ ζ² − (1 − ζ²) = 2ζ² − 1.

Equivalently, ∥θ_a − θ′_a∥ = √(2 − 2θ_a · θ′_a) ≤ 2√(1 − ζ²) = 2C_ζ. Thus, the covering number of Θ_1 is upper bounded by (KC_ζ/ǫ)^d for some absolute constant K, which yields a regret bound of 1 + ǫT + O(√(K^d C_ζ^d T/ǫ^d)). Choosing ǫ ≍ (KC_ζ)^{d/(d+2)} T^{−1/(d+2)} reduces this upper bound to O(C_ζ^{d/(d+2)} T^{(d+1)/(d+2)}), as desired.
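The diameter bound ∥θ_a − θ′_a∥ ≤ 2C_ζ for the cap Θ_1 can be checked by sampling (a standalone sketch; d and ζ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
d, zeta = 6, 0.8
b1 = np.zeros(d)
b1[0] = 1.0
C_zeta = np.sqrt(1 - zeta ** 2)

def sample_cap():
    """Unit vector u with u @ b1 >= zeta (a point in the spherical cap)."""
    c = rng.uniform(zeta, 1)
    v = rng.normal(size=d)
    v[0] = 0.0                       # make v orthogonal to b1
    v /= np.linalg.norm(v)
    return c * b1 + np.sqrt(1 - c * c) * v

for _ in range(500):
    u, w = sample_cap(), sample_cap()
    # inner-product bound from the proof, then the resulting diameter bound
    assert u @ w >= 2 * zeta ** 2 - 1 - 1e-9
    assert np.linalg.norm(u - w) <= 2 * C_zeta + 1e-9
```

Because the cap has diameter 2C_ζ, its ǫ-covering number scales like (C_ζ/ǫ)^d rather than (1/ǫ)^d, which is exactly the C_ζ^d savings in Proposition 4.11.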
C Proofs in Section 5

C.1 Proof of Proposition 5.2

Proof. Let the leader run the phased elimination algorithm of Huang et al. [2021, Algorithm 6] using the response b*_θ(a_t) as the proxy reward to maximize. This proxy reward, in expectation, is a homogeneous polynomial of degree 2k − 1. By Corollary 3.16 in Huang et al. [2021], the algorithm achieves

R̃(T) ≤ Õ(√(d^{2k−1} T)),   (39)

where R̃(T) = Σ_{t=1}^T (1 − b*_θ(a_t)) is the proxy regret measured with respect to the proxy reward (i.e., the response). Note that the reward is maximized exactly when the proxy reward is maximized. Thus, the Lipschitz property (19) gives

R(T) ≤ (2k/(2k − 1)) R̃(T) ≤ Õ(√(d^{2k−1} T)).   (40)
+19
+
diff --git a/idFJT4oBgHgl3EQfWizT/content/tmp_files/load_file.txt b/idFJT4oBgHgl3EQfWizT/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..0fa2656012e9296a26a21cc7963eac6364fb2cea
--- /dev/null
+++ b/idFJT4oBgHgl3EQfWizT/content/tmp_files/load_file.txt
@@ -0,0 +1,686 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf,len=685
+page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='11518v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='LG] 27 Jan 2023 Online Learning in Stackelberg Games with an Omniscient Follower Geng Zhao ∗ † Banghua Zhu ∗ † Jiantao Jiao ∗ Michael I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Jordan ∗ January 30, 2023 Abstract We study the problem of online learning in a two-player decentralized cooperative Stack- elberg game.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' In each round, the leader first takes an action, followed by the follower who takes their action after observing the leader’s move.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' The goal of the leader is to learn to minimize the cumulative regret based on the history of interactions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Differing from the tradi- tional formulation of repeated Stackelberg games, we assume the follower is omniscient, with full knowledge of the true reward, and that they always best-respond to the leader’s actions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' We analyze the sample complexity of regret minimization in this repeated Stackelberg game.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' We show that depending on the reward structure, the existence of the omniscient follower may change the sample complexity drastically, from constant to exponential, even for linear cooperative Stackelberg games.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' This poses unique challenges for the learning process of the leader and the subsequent regret analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' 1 Introduction The multi-agent learning problem [Ferber and Weiss, 1999, Wooldridge, 2009, Filar and Vrieze, 2012, Zhang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=', 2021] has received significant attention reflecting its wide variety of real-world applications, including autonomous driving Shalev-Shwartz et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' [2016], Sallab et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' [2017] and human-robot interaction Kober et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' [2013], Lillicrap et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' [2015], Goodrich et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' [2008], Xie et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' [2021].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' In a multi-agent system, it is natural to assume that each agent possesses a dif- ferent set of information due to its different viewpoint and history of actions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' This phenomenon is commonly referred to as the property of information asymmetry Yang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' [2022].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Such information asymmetry poses challenges to the coordination and cooperation between learning agents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' In this paper, we study how the information asymmetry affects the sample complexity of learning a two-player decentralized cooperative repeated Stackelberg game, with a focus on the setting when the follower is omniscient and myopic, and always best-responds to the leader’s actions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Consider an illustrative example in human-robot interaction where a robot is required to collaborate with a human to achieve some shared objective.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' This can be formulated as a repeated Stackelberg game where the interactions between human and robot happen in multiple rounds, and the human is an omniscient expert who knows the exact target and how to achieve it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' In each round, the robot, as the leader who hopes to learn the world model and human behavior from scratch, first takes some action.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' After seeing the robot’s action, the human, as an expert follower who possesses perfect information about the world, always best-responds to the robot’s action to maximize their reward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' The robot hopes to use as few as possible interactions to learn the world model and human behavior, and eventually find the optimal action that maximizes a shared reward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' ∗University of California, Berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Email: {gengzhao,banghua}@berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='edu,jiantao@eecs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='edu, jordan@cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='edu †The two authors contributed equally to this work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' 1 Concretely, during each round t of the interaction, the leader first plays an action at ∈ A, and the follower plays another action bt ∈ B upon (perfectly) observing at.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' We assume that the two players share a reward, rt = h⋆(at, bt) + zt, where zt ∈ R is some zero-mean sub- Gaussian noise, h⋆ belongs to a family H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' We also assume that the follower has full knowledge of the reward and always best responds with bt ∈ arg maxb∈B h⋆(at, b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' However, the leader does not know h⋆ and can only explore via taking actions at and making inferences from past observations (a1, b1, r1), · · · , (at−1, bt−1, rt−1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='1 We are interested in providing tight bound for the Stackelberg regret, defined as R(T) = max a∈A E � T � t=1 � max b∈B h⋆(a, b) − max bt∈B h⋆(at, bt) �� .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' The Stackelberg regret characterizes the gap between the reward achieved from the optimal leader action and the reward from the actual leader action at.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Compared with the traditional bandit problem, the extra observation of bt can be viewed as side information accompanying the usual action-reward pair.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Depending on how the function family H and side information b are designed, the complexity of learning for the leader may vary.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Here we briefly summarize several illustrative examples where the follower may help or harm the leader’s learning process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' We will present a general formalization that encompasses these examples in the next section.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
1. Curse of expertise. Imagine that in a driving system, the self-driving vehicle (leader) and the human driver (follower) work together to avoid collisions. For most of the aggressive actions the leader takes, the final reward for non-collision is high, since the human driver consistently exerts effort to evade the self-driving vehicle in order to prevent collisions. From the leader's point of view, aggressive actions therefore lead to outcomes similar to safe actions. The expertise of the human prevents the leader from learning from failure cases.
2. Imitation learning. Consider an assembly robot (leader) that learns to move goods to a destination together with a human expert (follower). This can be modeled by the robot choosing a drop-off location, from which the human expert continues to the correct destination. In this simple example, the robot and the human expert cooperate in a "linear" fashion: the expert can complete whatever the robot leaves undone, and upon observing the expert's move the robot should simply imitate the behavior of the human expert in the future. This corresponds to an "imitation-based" interaction that can greatly accelerate the learning process.
3. Expert-guided learning. In most cases, the self-driving vehicle may have a target that is similar to, but not exactly the same as, the human driver's. For example, both aim to avoid collisions while heading to different targets. In this case, pure imitation-based learning will fail, but the self-driving vehicle can still glean good driving standards from the human driver. With the extra observation of the human driver's behavior, the self-driving vehicle can learn much faster.
In this paper, we abstract and formalize these three scenarios into a simple linear Stackelberg game and analyze the sample complexity of this game. We briefly overview our main results in the next section.

¹ For simplicity we assume in the introduction that the leader can see b_1, ..., b_{t−1} without noise. Later we generalize to the case where the observed b_t is also noisy.
1.1 Main results

Contrary to the traditional literature on linear bandits, we show that the worst-case sample complexity for achieving ε-Stackelberg regret is at least exponential even when h⋆ belongs to the linear family H_φ = {θ · φ(a, b)}. The hard instance corresponds to the "curse of expertise" example discussed above, where the follower's best response hurts the observation and thus harms the whole learning process.

Theorem 1.1 (Curse of expertise, informal). There exists some φ such that for any algorithm, we can find some h⋆ ∈ H_φ with regret Ω(T^{(d−3)/(d−2)}).

This shows that the leader needs an exponential number of samples to learn a good policy even when the reward is linear. We also present an upper bound O(T^{(d+1)/(d+2)}) for linear rewards in Theorem 3.3.
On the other hand, the side information b_t can also greatly improve the sample complexity when the linear family is structured. We provide an Upper Confidence Bound (UCB) based algorithm [Auer et al., 2002] that leads to an improved bound in this setting. In particular, we recover the rate for imitation learning when the leader can simply mimic the behavior of the follower.

Theorem 1.2 (Imitation learning, informal). There exists some φ such that for any h⋆ ∈ H_φ, when b_t is observed, the leader can achieve regret O(log²(T)) by imitating the follower's behavior. However, when b_t is not observed, the regret is Θ(√T).
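For reference, a generic UCB leader in the tabular case can be sketched as follows. This is plain UCB1 applied to the leader's induced one-player bandit (pulling arm a yields a noisy sample of max_b h⋆(a, b), since the follower best responds); it is a baseline sketch, not the structured algorithm of the paper.

```python
import numpy as np

def ucb_leader(H, T, sigma=0.1, seed=0):
    """UCB1 on the leader's induced bandit: arm a has mean max_b h*(a, b)
    because the omniscient follower always best responds. Returns the
    cumulative pseudo-regret over T rounds."""
    rng = np.random.default_rng(seed)
    n_arms = H.shape[0]
    means = H.max(axis=1)          # true value of each leader arm
    counts = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    regret = 0.0
    for t in range(1, T + 1):
        if t <= n_arms:            # play each arm once to initialize
            a = t - 1
        else:
            bonus = np.sqrt(2 * np.log(t) / counts)
            a = int(np.argmax(sums / counts + bonus))
        r = means[a] + sigma * rng.standard_normal()  # noisy shared reward
        counts[a] += 1
        sums[a] += r
        regret += means.max() - means[a]
    return regret

rng = np.random.default_rng(1)
H = rng.uniform(size=(5, 4))
print("UCB leader regret over 2000 rounds:", round(ucb_leader(H, 2000), 2))
```

The structured settings in the paper improve on this by also using the observed follower responses b̂_t, which this baseline ignores entirely.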
Similarly, we can design cases where observing b_t reduces the problem to a traditional linear bandit, while not observing b_t incurs exponential sample complexity.

Theorem 1.3 (Expert-guided, informal). There exists some φ such that for any h⋆ ∈ H_φ, when b_t is observed, the leader can achieve regret O(√T). However, when b_t is not observed, the regret is Ω(T^{(d−3)/(d−2)}).
In addition to these three examples, we discuss more complicated scenarios where UCB fails, and we show that a careful analysis is necessary to achieve a near-optimal rate. In particular, we establish such a rate for polynomial bandits, where the best response corresponds to a lower-degree polynomial; this helps improve the rate when the noise levels for the reward and the observed follower behavior are similar.

Theorem 1.4 (Polynomial bandit, informal). There exists a family of degree-2k polynomials such that the regret is Θ(√(d^{2k−1} T)) when b_t is observed, and Θ(√(d^{2k} T)) when b_t is not observed.
1.2 Related work

Decentralized Stackelberg games. The problem of repeated Stackelberg games has been studied extensively [von Stackelberg, 2010, Marecki et al., 2012, Lauffer et al., 2022, Kao et al., 2022], in a standard setting where the leader leads and the myopic follower follows with its best response for the current round. Kao et al. [2022] and Lauffer et al. [2022] study a similar setting to ours, in which a leader and a follower interact through a cooperative Stackelberg game that comprises a two-stage bandit problem. However, Kao et al. [2022] restrict their focus to the tabular case where both A and B are finite and the reward h⋆ is uncorrelated across different action pairs (a, b). They also assume that both the leader and the agent run regret-minimization algorithms independently, and show that the classic upper confidence bound (UCB) algorithm for the multi-armed bandit problem can be used by the leader and the agent, respectively, to achieve asymptotically optimal performance (i.e., no regret). However, it is unclear whether such results generalize to bandits with function approximation or to the case of omniscient agents. Indeed, our results show that the general case (or even just the linear case) is not always statistically tractable. Note also that Lauffer et al. [2022] show that the regret can depend exponentially on the dimension of the agent's utility. Other examples of Stackelberg games include Stackelberg security games [Conitzer and Sandholm, 2006, Tambe, 2011], strategic learning [Hardt et al., 2016, Dong et al., 2018, Liu and Chen, 2016], dynamic task pricing [Kleinberg and Leighton, 2003], and online contract design [Ho et al., 2014, Zhu et al., 2022]. The problem of online learning in contract theory considers a decentralized general-sum Stackelberg game with omniscient agents, focusing on a special case where the rewards for the leader and the agent are both linear. It is shown in Zhu et al. [2022] that one has to pay an exponential sample complexity in this setting to achieve small regret in the worst case.
Centralized Stackelberg games. Centralized Stackelberg games are also well studied in the literature [Zhong et al., 2021, Bai et al., 2021, Gerstgrasser and Parkes, 2022, Yu et al., 2022], where the machine learning algorithm has control over both the leader and the follower. Bai et al. [2021] consider the repeated Stackelberg game where both the leader and the agent learn their optimal actions (a Stackelberg equilibrium) from samples. However, they assume a central controller that can determine the actions of both the leader and the agent. Moreover, they rely on an assumption of a bounded gap between the optimal response and an ε-approximate best response. In contrast, in our framework, we assume that the agent's utility is unknown and that the agent always takes the best response.
Bandits with side information. There has been significant effort in studying bandits with side information [Wang et al., 2005, Langford and Zhang, 2007, Foster et al., 2021]. Such side information is generally assumed to be available before a decision is made. Foster et al. [2021] also consider the case where an extra observation is available after taking an action; however, they mainly focus on the reinforcement learning setting, where the extra observation is the trajectory. Although our observation of follower behavior can also be viewed as side information, it additionally alters the reward in the Stackelberg game, which changes the structure of the multi-agent problem.
2 Formulation

We consider a two-player cooperative Stackelberg bandit game with an omniscient follower. Let A ⊆ R^{d_1} and B ⊆ R^{d_2} be compact sets. Up to a scaling factor, we assume that A and B reside inside the unit ball centered at the origin. During each round t ∈ [T] of interaction, the leader plays an action a_t ∈ A, and the follower plays b_t ∈ B upon (perfectly) observing a_t. The two players both receive a reward r_t = h⋆(a_t, b_t) + z_t, where z_t ∈ R is zero-mean σ_r-sub-Gaussian and independent of all past events. We make the realizability assumption that h⋆ belongs to a (known) family H of real-valued functions on B^{d_1} × B^{d_2}. As is common in the study of bandits, we assume that the reward function is bounded, i.e., there exists C ∈ (0, ∞) such that 0 ≤ h ≤ C for all h ∈ H. We assume C = 1 throughout the paper unless stated otherwise.

We assume that the follower, modeled after an expert human player, has full knowledge of the game and always best responds with an optimal action b_t ∈ arg max_{b∈B} h⋆(a_t, b). The leader then makes a noisy observation of b_t, given by b̂_t = b_t + w_t, where w_t ∈ R^{d_2} is zero-mean σ_b-sub-Gaussian (e.g., component-wise σ_b-sub-Gaussian with independent zero-mean coordinates) and independent of all past events. For convenience, we denote the set of best responses to the leader's action a under the ground-truth reward function h by b⋆_h(a), and write h̄(a) := max_{b∈B} h(a, b). The optimal action, unbeknownst to the leader, is denoted a⋆ := arg max_{a∈A} h̄⋆(a).

The leader's objective is to minimize the regret over T rounds of interaction, defined as

R(T) = max_{a ∈ A} E[ Σ_{t=1}^{T} ( h̄⋆(a) − h̄⋆(a_t) ) ].  (1)

We will also focus on the sample complexity of achieving low (average) regret; that is, for some ε, δ ∈ [0, 1], the minimal T ∈ N such that R(T) ≤ εT.
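To make the interaction protocol concrete, the following sketch simulates a few rounds under an assumed reward h⋆(a, b) = 1 − ∥b − a∥², for which the follower's best response is simply b_t = a_t. This specific h⋆ and the parameter values are our illustration, not an instance from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d1 = d2 = 3
sigma_r, sigma_b = 0.05, 0.05
T = 5

def h_star(a, b):
    # An assumed bounded reward on the unit balls; the best response is b = a.
    return 1.0 - float(np.sum((b - a) ** 2))

def best_response(a):
    return a.copy()  # argmax_b h*(a, b) for this particular h*

history = []
for t in range(T):
    a_t = rng.standard_normal(d1)
    a_t /= max(1.0, np.linalg.norm(a_t))    # keep a_t inside the unit ball
    b_t = best_response(a_t)                # omniscient follower
    r_t = h_star(a_t, b_t) + sigma_r * rng.standard_normal()  # shared noisy reward
    b_hat = b_t + sigma_b * rng.standard_normal(d2)           # noisy view of b_t
    history.append((a_t, b_hat, r_t))

print(f"collected {len(history)} observations; last reward = {history[-1][2]:.3f}")
```

Note that the leader's dataset consists of triples (a_t, b̂_t, r_t): compared with a plain bandit, each round carries the extra (noisy) best-response observation b̂_t.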
Notation. We use calligraphic letters for sets and operators, e.g., A. Given a set A, we write |A| for the cardinality of A. B^d and S^{d−1} denote the unit ball and the unit sphere, both centered at the origin, in d-dimensional Euclidean space. Vectors are assumed to be column vectors, except for probability and measure vectors. For a vector v ∈ R^d and an integer i ∈ N, we use v_i to denote the i-th element of v, and v_{−i} to denote the vector of all elements of v except v_i. For two n-dimensional vectors x and y, we use x · y = x^⊤ y to denote their inner product. We write f(x) = O(g(x)) or f(x) ≲ g(x) if there exist a positive real number M and some x_0 such that |f(x)| ≤ M g(x) for all x ≥ x_0. We use Õ(·) for the big-O notation ignoring logarithmic factors. We write f(x) = Ω(g(x)) or f(x) ≳ g(x) if there exist a positive real number M and some x_0 such that |f(x)| ≥ M g(x) for all x ≥ x_0. We write f(x) = Θ(g(x)) if both f(x) = O(g(x)) and f(x) = Ω(g(x)) hold. We use ∥·∥_p to denote the ℓ_p norm for p ∈ (0, ∞], with ∥·∥ denoting the Euclidean (ℓ_2) norm ∥·∥_2.
Parameterized family. In subsequent discussions, we consider the parameterized case in which H admits a parameterization over a compact parameter space Θ; the class is denoted by H_Θ = {h_θ | θ ∈ Θ}. When the parameterization is linear, that is,

h_θ(a, b) = θ · φ(a, b)  (2)

for some feature function φ : A × B → B^d, we denote the class by H_{Θ,φ}. We denote the true parameter by θ⋆. For instance, when A and B are the sets of standard basis vectors in R^{|A|} and R^{|B|}, with φ(a, b) = ab^⊤ and θ bounded in R^{|A|×|B|}, we recover the tabular model of Kao et al. [2022] with finite action sets. In general, however, we focus on cases with infinite action sets.
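As a sanity check on the tabular correspondence, the snippet below (our own illustration) encodes h_θ(a, b) = θ · φ(a, b) with φ(a, b) = ab^⊤ for one-hot actions and verifies that the linear model simply reads off entries of a reward table.

```python
import numpy as np

n_a, n_b = 3, 4
theta = np.arange(n_a * n_b, dtype=float).reshape(n_a, n_b)  # θ in R^{|A|×|B|}

def phi(a, b):
    """Feature map φ(a, b) = a b^T for one-hot actions a, b."""
    return np.outer(a, b)

def h_theta(a, b):
    # θ · φ(a, b): inner product of θ with the rank-one feature matrix.
    return float(np.sum(theta * phi(a, b)))

e = lambda i, n: np.eye(n)[i]  # i-th standard basis vector in R^n

# With a = e_i and b = e_j, the linear model recovers the table entry θ[i, j].
assert h_theta(e(1, n_a), e(2, n_b)) == theta[1, 2]

# The omniscient follower's best response to a = e_0 is argmax_j θ[0, j].
best_b = int(np.argmax([h_theta(e(0, n_a), e(j, n_b)) for j in range(n_b)]))
print("follower best response to a = e_0: column", best_b)
```

With infinite action sets the same feature-map formalism applies, but the argmax over B can no longer be computed by enumeration, which is where the difficulty studied below enters.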
+page_content=' 3 Linear Stackelberg games: Curse of expertise In this section, we study the sample complexity of learning in linear Stackelberg game, where the family of reward is restricted to HΘ,φ for some given Θ and φ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='1 An exponential lower bound.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
It is well known that the regret for traditional linear bandits grows as Θ(d√T) [Abbasi-Yadkori et al., 2011]. In the case of a linear Stackelberg game, we present a worst-case lower bound on the regret that is exponential in the dimensionality for the linear family. This suggests that the leader cannot learn the task well unless in possession of an exponential number of samples, even when we restrict to linear Stackelberg games. Assume the leader makes perfect observations of the follower's responses (i.e., σb = 0). We have the following lower bound.
Theorem 3.1. For any d ≥ 3, there exists some φ such that, for any algorithm that the leader runs, one can find some instance with hθ ∈ HΘ,φ such that

    R(T) ≳ T^{(d−3)/(d−2)}.    (3)

In other words, the sample complexity for achieving ǫ (average) regret is at least Ω((1/ǫ)^{d−2}). The proof is detailed in Appendix A.1.
The worst-case instance, presented in the example below, can be reduced to the ReLU bandit problem, which is known to suffer from exponential sample complexity [Dong et al., 2021].

Example 3.2. Let A = B^{d−1}, B = [0, 1] and Θ = {θ | θ_{−d} ∈ S^{d−2}, θ_d = 1 − ∆} for some ∆ ∈ (0, 1). Let the feature function be φ(a, b) = ((1 − b)a, b). One can verify that in this case, one has

    hθ(a) = max{1 − ∆, θ_{−d} · a}.    (4)

Thus when a is chosen far from θ_{−d}, the reward will remain constant.
Theorem 3.1 is no mystery mathematically: the best response may destroy linearity in the leader's observations, imposing a statistical toll. Conceptually, however, the message of the theorem is striking: it highlights a "curse of expertise," i.e., the potential difficulty of learning with an expert on a decentralized bandit learning task with a large action space. From the classic single-agent bandit learning perspective, the task the two agents aim to solve is straightforward: a linear bandit on the action space φ(A, B). In other words, if the expert follower let the novice leader control the choice of b, the average regret would steadily decrease at a rate of Õ(d/√T). On the other hand, with a myopic focus, the follower's expertise in best responding ironically results in significantly higher regret, as it deprives the learner of the ability to explore. In the context of autonomous driving, for example, this can manifest in scenarios where the autonomous vehicle takes a poor action (e.g., an aggressive lane change) yet other vehicles or pedestrians immediately respond by slowing down or steering away to avoid a possible collision, thereby hiding the potential negative consequences of the action. The lack of coordination and the constant best response from the follower, both common in practice, make it hard for the leader to efficiently learn the reward landscape or improve their current policy.
3.2 An exponential upper bound.

For any class H of reward functions on a pair of actions (a, b), an upper bound on the sample complexity (and regret) can be obtained using a covering argument.
Theorem 3.3. Let N(ǫ) = N(H, ǫ, ∥·∥∞) denote the ℓ∞ covering number of H with radius ǫ > 0. Then we can achieve

    R(T) ≲ inf_{ǫ>0} { ǫT + √(N(ǫ)T) }.    (5)

To achieve this, simply compute an ǫ-covering of H and let the leader play no-regret algorithms on the ǫ-covering set. Note that although the covering is constructed for pairs of actions (a, b) ∈ Aǫ × Bǫ, it suffices for the leader to run no-regret algorithms on the actions Aǫ. The detailed algorithm and proof are given in Appendix A.2.
This upper bound is achieved even when the leader does not utilize the observations of the follower's responses. Indeed, in the worst case (e.g., in Example 3.2), the responses provide no information.
As a corollary, in the linear regime with HΘ,φ, the covering number is N(ǫ) = N(Θ, ǫ, ∥·∥) ≤ exp(O(d log(1/ǫ))) [Wainwright, 2019]. Choosing ǫ ≍ T^{−1/(d+2)}, Theorem 3.3 reduces to the following upper bound in the linearly parameterized case.

Corollary 3.4. In the linear case, we can achieve R(T) ≲ T^{(d+1)/(d+2)}.

In other words, the sample complexity for achieving average regret equal to ǫ is upper bounded by Õ((1/ǫ)^{d+2}). This upper bound is agnostic to any structural property of the feature function φ, such as smoothness or even continuity.
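The tradeoff behind Corollary 3.4 can be checked numerically. The sketch below (our illustration, not part of the original analysis; constants are dropped) minimizes the bound ǫT + √(N(ǫ)T) with N(ǫ) = (1/ǫ)^d over a grid of ǫ and compares the grid minimizer to the theoretical choice ǫ ≍ T^{−1/(d+2)}:

```python
import numpy as np

def covering_bound(eps, T, d):
    # Upper bound of Theorem 3.3 with N(eps) = (1/eps)^d, constants dropped.
    return eps * T + np.sqrt((1.0 / eps) ** d * T)

T, d = 10**6, 4
eps_grid = np.logspace(-4, 0, 2000)
best_eps = eps_grid[np.argmin(covering_bound(eps_grid, T, d))]
theory_eps = T ** (-1.0 / (d + 2))  # the choice made before Corollary 3.4

# The grid minimizer lands within a constant factor of the theoretical choice.
print(best_eps, theory_eps)
assert 0.1 < best_eps / theory_eps < 10
```

Balancing the two terms (ǫT against ǫ^{−d/2}√T) is what produces the T^{(d+1)/(d+2)} rate, and the exponent's dependence on d is the source of the exponential sample complexity in ǫ.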
4 UCB with side observations

Although the worst-case sample complexity for linear Stackelberg games is exponential, it is possible to obtain a fine-grained analysis and improved rates for the family HΘ,φ when φ is better structured. A natural choice of algorithm for the leader is some variant of UCB that incorporates observations of the follower's actions. In this section, we describe a general recipe for a family of UCB algorithms that incorporate this side information, as well as the challenges in their design.
4.1 Algorithm description

We consider the following variant of UCB that uses the follower's responses as side information to improve the confidence set.

Algorithm 1 UCB with side information from expert
    Input: Regression oracles Reg(b) and Reg(r) on responses and rewards, {α_t}_{t∈[T]}, {β_t}_{t∈[T]}
    for t = 1 to T do
        Compute h(b)_t = Reg(b)(b̂_1, . . . , b̂_{t−1}) and h(r)_t = Reg(r)(r_1, . . . , r_{t−1})
        Set H(b)_t := {h : Σ_{i=1}^{t−1} ∥b*_h(a_i) − b*_{h(b)_t}(a_i)∥² ≤ α_t²}
        Set H(r)_t := {h : Σ_{i=1}^{t−1} (h(a_i) − h(r)_t(a_i))² ≤ β_t²}
        Construct confidence set H_t = H(b)_t ∩ H(r)_t
        Take action a_t ∈ arg max_{a∈A} sup_{h∈H_t} h(a)
        Observe (noisy) reward r_t and response b̂_t
    end for

Remark 4.1. The regression oracles and the sequences {α_t}_{t∈[T]}, {β_t}_{t∈[T]} must be chosen appropriately so that the following condition holds: given an error tolerance δ ∈ (0, 1), we require h⋆ ∈ ∩_{t=1}^T H_t with probability at least 1 − δ.
Remark 4.2. A common choice for Reg(b) and Reg(r) is the least-squares regression oracle, which computes

    h(b)_t ∈ arg min_{h∈H} Σ_{i=1}^{t−1} ∥b*_h(a_i) − b̂_i∥²    (6)

and

    h(r)_t ∈ arg min_{h∈H} Σ_{i=1}^{t−1} (h(a_i) − r_i)².    (7)

When the least-squares computation becomes infeasible under complex response-reward structures (this is common for (6)), custom oracles need to be designed. A more intricate approach may be to jointly construct the estimate using both {b̂_τ}_{τ∈[t−1]} and {r_τ}_{τ∈[t−1]}. We leave it for future research to study systematic designs of the oracles and the confidence sets.
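For a finite hypothesis class, the least-squares oracles (6) and (7) can be implemented by brute-force enumeration. The sketch below is our own toy illustration (the hypothesis class and its parameterization are invented for exposition, not taken from the paper): each hypothesis is a pair of a best-response map and a reward function, and each oracle selects the hypothesis minimizing the corresponding empirical squared loss.

```python
import numpy as np

def reg_b(hypotheses, actions, responses):
    """Response oracle (6): pick h in H minimizing sum_i ||b*_h(a_i) - b_hat_i||^2."""
    losses = [sum(np.sum((br(a) - b) ** 2) for a, b in zip(actions, responses))
              for br, _ in hypotheses]
    return hypotheses[int(np.argmin(losses))]

def reg_r(hypotheses, actions, rewards):
    """Reward oracle (7): pick h in H minimizing sum_i (h(a_i) - r_i)^2."""
    losses = [sum((h(a) - r) ** 2 for a, r in zip(actions, rewards))
              for _, h in hypotheses]
    return hypotheses[int(np.argmin(losses))]

# Toy finite class indexed by theta in {-1, +1}:
# best response b*_theta(a) = theta (constant), reward h_theta(a) = theta * a.
H = [(lambda a, th=th: np.array([th]), lambda a, th=th: th * a) for th in (-1.0, 1.0)]
acts = [0.5, 0.8]
resp = [np.array([1.0]), np.array([0.9])]  # noisy observations of b* = +1
rews = [0.4, 0.9]                          # noisy observations of h_{+1}

assert reg_b(H, acts, resp) is H[1]  # both oracles recover theta = +1
assert reg_r(H, acts, rews) is H[1]
```

For infinite classes the arg min in (6) and (7) is a genuine optimization problem, which is exactly where the infeasibility noted above arises.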
Remark 4.3. When the responses are unobserved or ignored (e.g., by choosing α_t = ∞), Algorithm 1 reduces to the classic Eluder UCB using the least-squares (reward) oracle with H_t = H(r)_t [Russo and Van Roy, 2013].

The choices of {α_t}_{t∈N} and {β_t}_{t∈N} can pose another challenge. A naive attempt to get a generic upper bound on α_t is to use a covering argument as in Russo and Van Roy [2013] with the following measure of distance between two functions h, h′ ∈ H: d(b)(h, h′) = sup_a ∥b*_h(a) − b*_{h′}(a)∥. Note, however, that this does not necessarily define a norm, and furthermore the covering number of H in this sense can be infinite when the best response is discontinuous in the leader's action a. Thus, such an approach is often not useful, and one may have to determine α_t on a per-instance basis.
4.2 Examples

While Theorem 3.1 shows that the involvement of the omniscient follower can lead to a "curse of expertise," a stark deterioration in the sample complexity, there are many scenarios where the leader's observation of the follower's responses can expedite learning significantly. In this section, we explore a few such examples.
4.2.1 An imitation-based example

Let us consider a setting where the leader achieves efficient learning through imitation. Heuristically, imitation arises when the optimal action for the leader is equal to the best response of the omniscient follower, or a function of it. This may capture, for instance, real-world robotics applications where the actions of the robot and the human expert are exchangeable and the true goal can be easily inferred from the expert's action. A simple scenario is when the robot and the human expert are supposed to carry out the same task perfectly, in which case the robot should simply treat the expert as a role model and imitate. The following is a concrete example.
Example 4.4. Let A = B = Θ = S^{d−1} (or B^d equivalently)². Consider the linearly parameterized function class HΘ,φ with feature function

    φ(a, b) = a + b.    (8)

Here, the optimal response b*_θ ≡ θ is independent of a, and hθ(a) = θ · a + 1.
Construction of confidence sets. The (noisy) observations of the follower's best responses simplify the problem into an imitation learning task. A simple oracle for the best-response observations is to take the A-projected empirical average of the responses, i.e., θ(b)_t = Π_A((1/(t−1)) Σ_{i=1}^{t−1} b̂_i).³ The response-based confidence set reduces to

    Θ(b)_t = {θ ∈ Θ : ∥θ − θ(b)_t∥ ≤ α_t/√(t−1)}.

Standard sub-Gaussian concentration results suggest that the (Euclidean) radius of this confidence set shrinks at a rate of t^{−1/2}.
Lemma 4.5. To ensure θ⋆ ∈ ∩_{t∈[T]} Θ_t with probability at least 1 − δ, it suffices to choose α_t = Θ(σb √(d + log(T/δ))).

UCB then chooses actions on S^{d−1} increasingly close to the empirical estimate θ(b)_t.⁴ The regret bound follows from these choices of confidence sets.
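The estimator and its shrinking radius are simple to realize in this example. The sketch below (an illustration on our part, with arbitrary choices of d, σb, and θ⋆) implements the sphere-projected empirical average θ(b)_t and checks that its estimation error shrinks with the sample count, consistent with the α_t/√(t−1) radius:

```python
import numpy as np

def projected_estimate(responses):
    """theta^(b)_t = Pi_A(mean of noisy responses), with A = S^{d-1}:
    projection onto the unit sphere is just normalization."""
    m = np.mean(responses, axis=0)
    n = np.linalg.norm(m)
    return m / n if n > 0 else m  # tie at the origin broken arbitrarily

rng = np.random.default_rng(0)
theta_star = np.array([1.0, 0.0, 0.0])
sigma_b = 0.5

errs = []
for t in (10, 1000):
    # b_hat_i = theta* + sub-Gaussian noise, i = 1, ..., t
    obs = theta_star + sigma_b * rng.standard_normal((t, 3))
    errs.append(np.linalg.norm(projected_estimate(obs) - theta_star))

# More responses -> tighter estimate, roughly like t^{-1/2}.
assert errs[1] < errs[0]
```

Because the best response is θ itself, observing it collapses the bandit problem into mean estimation, which is what drives the logarithmic regret in Proposition 4.6 below.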
Proposition 4.6. In Example 4.4, UCB achieves the regret bound

    RUCB(T) ≲ σb² log T · (d + log T).    (9)

In other words, the average regret decays at a rate of Õ(σb² d/T). This setting has also been analyzed in the imitation learning literature [Rajaraman et al., 2021], and the results are consistent.
²While it is customary to consider Θ = B^d, we will observe below that the imitation-based algorithm does not crucially rely on ∥θ⋆∥ and only incurs smaller regret if ∥θ⋆∥ < 1. This is because the algorithm asymptotically relies solely on the response observations, which are invariant under scaling of θ⋆. It is also without loss of generality to restrict all actions to the sphere.
³Define the projection of y ∈ R^d onto a closed set X ⊆ R^d as Π_X(y) := arg min_{x∈X} ∥y − x∥, breaking ties arbitrarily when the minimizer is not unique.
⁴Even simpler, the leader can play the A-projected empirical average of the responses. Under our choice of constant α, the analysis is the same, with the results differing by at most a constant factor.
Remark 4.7. When the follower's responses are unobserved, this is simply a linear bandit, for which the minimax regret is Ω(σb d√T) ≫ O(σb² d log² T). This indicates the value of the b_t observations. When the follower's response is noiseless, a single sample suffices to find the optimal response, since one always observes b⋆_θ = θ.
Remark 4.8. Note the gap between the Θ(log T) regret when the response observations are used and the Θ(√T) regret when they are ignored or unavailable, showing the value of those response observations. In fact, it is easy to modify this example slightly (e.g., taking φ(a, b) = max{|θ⊤a|, ∆}b for some ∆ ∈ (0, 1)) to create an even larger gap: when the leader uses the response observations, the regret is Õ(d log T) with sample complexity Õ(d log(1/ǫ)); when the response observations are unavailable, the sample complexity increases to Ω(ǫ^{−d}).
4.2.2 Expert-guided exploration

In many scenarios, the omniscient follower's actions may not directly reveal the exact state of the world but still provide crucial information. The next example illustrates a simple setting where the follower's response can significantly reduce the sample complexity.
Example 4.9. Let A = B = S^{d−1} and Θ = {(θa, θb) ∈ S^{d−1} × S^{d−1} : θa · θb ≥ ζ} for some ζ ∈ (0, 1). Consider the parameterized family of functions HΘ = {hθ : θ ∈ Θ} where

    hθ(a, b) = ReLU(θa · a − ∆) + θb · b,

for some ∆ ∈ (0, 1). For simplicity, we assume that the response observations are noiseless (i.e., σb = 0), although the noisy case can be analyzed analogously.
Confidence sets. The best response is b*_θ ≡ θb, again independent of the leader's action. Upon observing b_1 = θb, the leader should construct the confidence sets Θ(b)_t = {θa ∈ S^{d−1} : θa · b_1 ≥ ζ} × {b_1}, while Θ(r)_t is chosen as in linear UCB. As a result, all subsequent actions the leader takes must fall into

    A_1 := {a ∈ A : a · b_1 ≥ ζ}.    (10)

This refinement of the action set reduces the sample complexity, and depending on the size of ζ relative to ∆, the reduction can be significant.
Strong reduction. When 1 − ζ ≤ (1 − ∆)/4, the leader learns that θ_a · b_1 ≥ ζ. In particular, any action a ∈ A_1 must satisfy

θ_a · a = (2 − ∥θ_a − a∥²)/2 ≥ (2 − (∥θ_a − b_1∥ + ∥a − b_1∥)²)/2 ≥ (2 − (2√(2 − 2ζ))²)/2 = 1 − 4(1 − ζ) ≥ ∆, (11)

and thus h(a) = θ_a · a − ∆ + 1 behaves as a linear function within A_1. By playing UCB within A_1, the leader reduces the problem to a linear bandit instance and thus achieves the following regret bound.

Proposition 4.10. Assume 1 − ζ ≤ (1 − ∆)/4 in Example 4.9. UCB achieves

R_UCB(T) ≤ Õ(d√T). (12)

This leads to a sample complexity of Õ(d²/ε²), in contrast to the exponential sample complexity exp(O(d log(1/ε))) if the responses were unobserved. Information from the follower's response guides the leader's exploration to the well-conditioned part of the action space. Given the Ω(d√T) regret lower bound for linear bandits, the upper bound (12) is tight (up to logarithmic terms).
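The chain of inequalities in (11) can be checked numerically; the sketch below (illustrative parameters, boundary case 1 − ζ = (1 − ∆)/4) samples ζ-aligned parameters and actions and confirms that every inner product clears the bound:

```python
import numpy as np

rng = np.random.default_rng(1)
d, Delta = 4, 0.6
zeta = 1 - (1 - Delta) / 4        # boundary case of 1 - zeta <= (1 - Delta)/4

b1 = rng.normal(size=d); b1 /= np.linalg.norm(b1)

def sample_aligned(n):
    """Rejection-sample unit vectors v with v . b1 >= zeta."""
    out = np.empty((0, d))
    while out.shape[0] < n:
        v = rng.normal(size=(5000, d))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        out = np.vstack([out, v[v @ b1 >= zeta]])
    return out[:n]

thetas = sample_aligned(200)          # plausible parameters theta_a
actions = sample_aligned(200)         # actions in A1
worst = (thetas @ actions.T).min()    # smallest theta_a . a over all pairs
print(f"min theta_a . a = {worst:.3f}, bound 1 - 4(1 - zeta) = {1 - 4*(1 - zeta):.3f}")
```

The triangle inequality in (11) is slightly loose, so the empirical minimum sits a bit above the bound.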
Weak reduction. When ζ is small relative to ∆, the problem does not immediately reduce to a linear bandit, but we have the following improved upper bound.

Proposition 4.11. There exists an algorithm Alg that achieves

R_Alg(T) ≤ O((C_ζ^d T^{d+1})^{1/(d+2)}), (13)

where C_ζ := √(1 − ζ²) ∈ (0, 1).

Since C_ζ = √(1 − ζ²) shrinks as ζ grows, this bound improves as ζ increases. The sample complexity is therefore Õ(C_ζ^d ε^{−d−2}), a C_ζ^d reduction compared with the original complexity without observing the responses in Corollary 3.4. Since the reduced problem is still a ReLU bandit, UCB will not be suitable. Instead, (13) can be achieved through discretization of A_1, as in the upper bound of Theorem 3.3.
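The discretize-then-explore idea behind (13) can be sketched as follows: build a crude net of A_1 and run finite-armed UCB1 over it. This toy sketch is not the algorithm of Theorem 3.3 (which uses a proper discretization); the net, instance, and all parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, zeta, Delta, sigma, T, N = 3, 0.5, 0.3, 0.1, 3000, 30

# Instance with theta_b = theta_a, so theta_a . theta_b = 1 >= zeta.
theta_a = rng.normal(size=d); theta_a /= np.linalg.norm(theta_a)
b1 = theta_a.copy()                     # observed noiseless best response

# Crude net of A1 = {a : a . b1 >= zeta} via rejection sampling.
arms = []
while len(arms) < N:
    v = rng.normal(size=d); v /= np.linalg.norm(v)
    if v @ b1 >= zeta:
        arms.append(v)
arms = np.array(arms)

# Mean reward of each arm: ReLU part plus theta_b . b1 = 1.
means = np.maximum(arms @ theta_a - Delta, 0.0) + 1.0
best = means.max()

# Finite-armed UCB1 over the net.
counts, sums, regret = np.zeros(N), np.zeros(N), 0.0
for t in range(T):
    if t < N:
        i = t                           # pull each arm once
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t + 1) / counts)
        i = int(np.argmax(ucb))
    counts[i] += 1
    sums[i] += means[i] + sigma * rng.normal()
    regret += best - means[i]

print(f"net size {N}, cumulative regret over the net: {regret:.2f}")
```

The regret is measured against the best arm of the net; the discretization error itself is what produces the T^{(d+1)/(d+2)} rate in (13).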
5 Beyond UCB

Although the UCB algorithm gives a near-optimal rate in most of the above examples, we also provide two cases where UCB fails to achieve the optimal rate. This necessitates tailored algorithm design in specific settings.

5.1 Nonlinear (polynomial) family

UCB is known to fail to achieve the optimal rate for the polynomial bandit family Huang et al. [2021], where the reward is a polynomial activation on top of a linear family. We construct an example which utilizes the structure of the polynomial bandit, formally defined below.

Example 5.1 (Polynomial bandit). Consider the convex function f(x) = x^{2k} for some k ∈ Z_+. Let

A = B^{d−1}, B = [−1, 1], Θ = B^{d−1} × {1}, (14)

and

φ(a, b) = (2kb a, −f*(2kb)), (15)

where f* is the convex conjugate of f. Consider the nonlinearly parameterized family

H_Θ := {h_θ(a, b) = f(θ · φ(a, b)) | θ ∈ Θ}. (16)

By properties of the convex conjugate,

h_θ(a) = f(θ_{−d} · a) = (θ_{−d} · a)^{2k}, (17)

with the best response

b*_θ(a) = argmax_{−1 ≤ b ≤ 1} [2kb θ_{−d} · a − f*(2kb)] = f′(θ_{−d} · a)/(2k) = (θ_{−d} · a)^{2k−1} ∈ [−1, 1].

This observation allows us to apply results on polynomial bandits Huang et al. [2021].
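The conjugate computation behind (17) and the best-response formula can be verified numerically. In this sketch we hard-code f*(y) = (2k − 1)(|y|/2k)^{2k/(2k−1)}, the conjugate of x ↦ x^{2k} (a standard calculation, stated here as an assumption), and maximize the follower's objective on a grid:

```python
import numpy as np

k = 2
two_k = 2 * k                      # f(x) = x^{2k}

def f(x):
    return x ** two_k

def f_star(y):
    """Convex conjugate of x -> x^{2k}: f*(y) = (2k-1) * (|y|/2k)^{2k/(2k-1)}."""
    return (two_k - 1) * (np.abs(y) / two_k) ** (two_k / (two_k - 1))

s = 0.6                            # the inner product theta_{-d} . a for some fixed action
# Follower's objective in b (Example 5.1): 2k b (theta_{-d} . a) - f*(2k b) over [-1, 1].
b_grid = np.linspace(-1.0, 1.0, 200001)
obj = two_k * b_grid * s - f_star(two_k * b_grid)

b_star = s ** (two_k - 1)          # claimed best response, a degree 2k-1 polynomial in s
print(f"grid argmax = {b_grid[np.argmax(obj)]:.5f}, s^(2k-1) = {b_star:.5f}")
print(f"max objective = {obj.max():.6f}, reward (theta.a)^(2k) = {f(s):.6f}")
```

The grid argmax matches s^{2k−1} and the attained maximum recovers the reward s^{2k}, as claimed in (17).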
Response-regret structure. Observe the following properties of the best response function in Example 5.1.

1. The expected reward is a function of the best response, independent of the true parameter. Namely,

h_θ(a) = b*_θ(a)^{2k/(2k−1)}. (18)

This mapping is Lipschitz:

|h_θ(a) − h_θ(a′)| ≤ (2k/(2k−1)) |b*_θ(a) − b*_θ(a′)|, (19)

and further

argmax_{a ∈ A} b*_θ(a) = θ_{−d} ∈ argmax_{a ∈ A} h_θ(a), (20)

with both maxima being 1.

2. The response observation, as a degree 2k − 1 polynomial, is more informative than the reward observation, a degree 2k polynomial, when the noise levels are the same and θ_{−d} · a is small.

Based on these two observations, the leader may view the response b_t as a proxy reward and aim to minimize the proxy regret

R̃(T) := Σ_{t=1}^{T} (1 − b*_θ(a_t)). (21)

This is consistent with minimizing the true regret R(T), which differs from the proxy regret R̃(T) by at most a constant factor by (19).
Regret bound. Using the response observations exclusively to minimize the proxy regret R̃(T) = Σ_{t=1}^{T} (1 − b*_θ(a_t)), the leader reduces her task to a polynomial bandit problem with a degree 2k − 1 polynomial activation function. By (19), we may focus on bounding the proxy regret. Corollary 3.16 of Huang et al. [2021] suggests that

R̃(T) ≤ Õ(√(d^{2k−1} T)), (22)

or equivalently a sample complexity of Õ(d^{2k−1}/ε²) for achieving ε average proxy regret. The following bound on the true regret follows from (19) and (22).

Proposition 5.2. In Example 5.1, there exists an algorithm Alg, using the response observations exclusively, that achieves

R_Alg(T) ≤ O(√(d^{2k−1} T)). (23)

Proposition 5.2 suggests an Õ(d^{2k−1}/ε²) sample complexity. For instance, the leader can achieve this regret with the zeroth-order algorithm proposed in Huang et al. [2021, Algorithm 6].
Remark 5.3 (Lower bound). Since the response observations have a higher signal-to-noise ratio, we should expect the sample complexity of Example 5.1 to be of the same order as the sample complexity of achieving ε average regret in a degree 2k − 1 polynomial bandit. Huang et al. [2021] show that this is lower bounded by Ω(d^{2k−1}/ε²). Thus, (23) is essentially optimal.

Remark 5.4 (Benefit of observing responses). If the leader does not observe the responses, the problem is equivalent to a degree 2k polynomial bandit. The optimal regret without observing the expert's actions leads to an Õ(d^{2k}/ε²) sample complexity. Thus, the response observations shave off a factor of d, which can be significant when the dimensionality is high.

Remark 5.5 (Suboptimality of UCB). Using the traditional eluder UCB algorithm leads to a suboptimal sample complexity of Õ(d^{2k}/ε²) when the leader solely uses the response observations. Still, this is a factor-d improvement over what she can achieve with UCB without the response observations.
5.2 Failure of the optimism principle

The next example is adapted from the ReLU bandit in Example 3.2, and shows that optimism-based methods can be dramatically suboptimal in certain problems.

Example 5.6. Let A = B^{d−1}, B = B^{d−1} × [0, 1], and

Θ = {(θ_{−d}, θ_d) | θ_{−d} ∈ B^d, θ_d = 1 − ∆} (24)

for some ∆ ∈ (0, 1). Consider the linear family H_{Θ,φ} with

φ(a, b) = ∥a∥ ((1 − b_d) a, b_d − ∥b_{−d}∥) + ((1 − ∥a∥)/2) (b_{−d}, 0). (25)

For any θ ∈ Θ with θ_{−d} ∈ S^{d−1}, the optimal action for the leader is θ_{−d}, with the follower best responding (0, 0) and achieving unit expected reward. When ∥a∥ = 1, this function behaves exactly as in Example 3.2, where b*_θ(a) = (0, 1) whenever θ_{−d} · a < 1 − ∆; when a = 0, the best response is b*_θ(0) = (θ_{−d}, b_d). Thus, if the response observations are noiseless, the leader learns the true parameter, and hence the optimal action, in one round by playing a_1 = 0.

However, optimism-based methods such as UCB will not achieve such efficient learning, even when the responses are noiselessly observed. It is straightforward to verify that, for any action a with ∥a∥ < 1, the optimistic reward satisfies

sup_{θ ∈ Θ} h_θ(a) < 1. (26)

Thus, as long as the confidence set contains some θ with θ_{−d} ∈ S^{d−1}, which holds under our initial condition, optimism causes the leader to take only actions a ∈ S^{d−1}, reducing the problem to the worst-case Example 3.2.
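A small simulation of Example 5.6 (illustrative code; the free coordinate b_d of the response is set to 0) shows the one-round identification: playing a_1 = 0 makes the follower's objective linear in b_{−d}, whose maximizer reveals θ_{−d}, after which the leader collects unit reward:

```python
import numpy as np

rng = np.random.default_rng(5)
d, Delta = 5, 0.2

# Hidden parameter: theta_{-d} on the unit sphere, theta_d = 1 - Delta.
theta_md = rng.normal(size=d - 1); theta_md /= np.linalg.norm(theta_md)
theta = np.append(theta_md, 1 - Delta)

def phi(a, b_minus, b_d):
    """Feature map (25); a and b_{-d} have d-1 coordinates."""
    na = np.linalg.norm(a)
    head = na * (1 - b_d) * a + (1 - na) / 2 * b_minus
    tail = na * (b_d - np.linalg.norm(b_minus))
    return np.append(head, tail)

# Round 1: play a1 = 0. The objective reduces to (1/2) theta_{-d} . b_{-d},
# so the noiseless best response reveals theta_{-d} directly (b_d is arbitrary).
a1 = np.zeros(d - 1)
b1_minus = theta_md.copy()                  # observed best response at a1 = 0
best_val = theta @ phi(a1, b1_minus, 0.0)

# Sanity check: no other b_{-d} in the unit ball does better at a1 = 0.
cand = rng.normal(size=(1000, d - 1))
cand /= np.maximum(np.linalg.norm(cand, axis=1, keepdims=True), 1.0)
vals = np.array([theta @ phi(a1, c, 0.0) for c in cand])

# Round 2: play the revealed optimal action; the follower responds (0, 0).
a2 = b1_minus
reward = theta @ phi(a2, np.zeros(d - 1), 0.0)
print(f"reward at revealed action: {reward:.6f}")
```

An optimistic learner never plays a_1 = 0, since by (26) every interior action looks suboptimal under the confidence set.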
6 Conclusions

We have studied a model of online learning in decentralized cooperative Stackelberg games. We showed that, even with an omniscient follower who always best responds (myopically), the worst-case sample complexity for a linear family can be as large as exp(Θ(d log(1/ε))). This "curse of expertise" highlights the challenge caused by miscoordinated exploration. It also raises the question of how a non-myopic expert follower should respond to the leader's actions (without knowing the leader's exact algorithm) to expedite their learning and maximize their long-term reward.

We considered UCB-type algorithms that incorporate response observations, and examined examples of varying hardness, ranging from efficient learning through imitation and guided exploration to the worst-case linear family with exponential sample complexity. Beyond the examples considered in this paper, there are numerous scenarios where the roles of the leader and the follower are more complex to reason about. This poses unique challenges both for the leader's learning process and for the subsequent regret analysis, indicating fertile ground for future research. Specifically, our current template of Algorithm 1 requires designing the confidence sets based on the specific response-reward structure of each problem. It remains open to find a general design (or prove the lack thereof) that systematically synthesizes the response and reward observations. A general framework of analysis that provides a unified yet sharp upper bound across the examples would also be valuable.
+page_content=' 12 References Yasin Abbasi-Yadkori, D´avid P´al, and Csaba Szepesv´ari.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Improved algorithms for linear stochas- tic bandits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Advances in neural information processing systems, 24, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Finite-time analysis of the multiarmed bandit problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Machine learning, 47(2):235–256, 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Yu Bai, Chi Jin, Huan Wang, and Caiming Xiong.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Sample-efficient learning of Stackelberg equilibria in general-sum games.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Advances in Neural Information Processing Systems, 34: 25799–25811, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Vincent Conitzer and Tuomas Sandholm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Computing the optimal strategy to commit to.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' In Proceedings of the 7th ACM Conference on Electronic Commerce, pages 82–90, 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, and Zhiwei Steven Wu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Strategic classification from revealed preferences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' In Proceedings of the 2018 ACM Conference on Economics and Computation, pages 55–70, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Kefan Dong, Jiaqi Yang, and Tengyu Ma.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Provable model-based nonlinear bandit and reinforce- ment learning: Shelve optimism, embrace virtual curvature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Advances in Neural Information Processing Systems, 34:26168–26182, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Jacques Ferber and Gerhard Weiss.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Multi-agent systems: an introduction to distributed artificial intelligence, volume 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Addison-wesley Reading, 1999.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Jerzy Filar and Koos Vrieze.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Competitive Markov decision processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Springer Science & Busi- ness Media, 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Dylan J Foster, Sham M Kakade, Jian Qian, and Alexander Rakhlin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' The statistical complexity of interactive decision making.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' arXiv preprint arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='13487, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Matthias Gerstgrasser and David C Parkes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Oracles & followers: Stackelberg equilibria in deep multi-agent reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' arXiv preprint arXiv:2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='11942, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Michael A Goodrich, Alan C Schultz, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Human–robot interaction: a survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Foundations and Trends® in Human–Computer Interaction, 1(3):203–275, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, and Mary Wootters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Strategic classi- fication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' In Proceedings of the 2016 ACM conference on Innovations in Theoretical Computer Science, pages 111–122, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
Chien-Ju Ho, Aleksandrs Slivkins, and Jennifer Wortman Vaughan. Adaptive contract design for crowdsourcing markets: Bandit algorithms for repeated principal-agent problems. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, pages 359–376, 2014.

Baihe Huang, Kaixuan Huang, Sham Kakade, Jason D Lee, Qi Lei, Runzhe Wang, and Jiaqi Yang. Optimal gradient-based algorithms for non-concave bandit optimization. Advances in Neural Information Processing Systems, 34:29101–29115, 2021.

Hsu Kao, Chen-Yu Wei, and Vijay Subramanian. Decentralized cooperative reinforcement learning with hierarchical information structure. In International Conference on Algorithmic Learning Theory, pages 573–605. PMLR, 2022.

Robert Kleinberg and Tom Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In 44th Annual IEEE Symposium on Foundations of Computer Science, pages 594–605. IEEE, 2003.
Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238–1274, 2013.

John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. Advances in Neural Information Processing Systems, 20, 2007.

Niklas Lauffer, Mahsa Ghasemi, Abolfazl Hashemi, Yagiz Savas, and Ufuk Topcu. No-regret learning in dynamic Stackelberg games. arXiv preprint arXiv:2202.04786, 2022.

Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

Yang Liu and Yiling Chen. A bandit framework for strategic regression. Advances in Neural Information Processing Systems, 29, 2016.
Janusz Marecki, Gerry Tesauro, and Richard Segal. Playing repeated Stackelberg games with unknown opponents. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 2, pages 821–828, 2012.

Nived Rajaraman, Yanjun Han, Lin Yang, Jingbo Liu, Jiantao Jiao, and Kannan Ramchandran. On the value of interaction and function approximation in imitation learning. Advances in Neural Information Processing Systems, 34:1325–1336, 2021.

Daniel Russo and Benjamin Van Roy. Eluder dimension and the sample complexity of optimistic exploration. Advances in Neural Information Processing Systems, 26, 2013.

Ahmad EL Sallab, Mohammed Abdou, Etienne Perot, and Senthil Yogamani. Deep reinforcement learning framework for autonomous driving. Electronic Imaging, 2017(19):70–76, 2017.

Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295, 2016.

Milind Tambe. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge University Press, 2011.
Heinrich von Stackelberg. Market Structure and Equilibrium. Springer Science & Business Media, 2010.

Martin J Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press, 2019.

Chih-Chun Wang, Sanjeev R Kulkarni, and H Vincent Poor. Bandit problems with side observations. IEEE Transactions on Automatic Control, 50(3):338–355, 2005.

Michael Wooldridge. An Introduction to Multiagent Systems. John Wiley & Sons, 2009.

Annie Xie, Dylan Losey, Ryan Tolsma, Chelsea Finn, and Dorsa Sadigh. Learning latent representations to influence multi-agent interaction. In Conference on Robot Learning, pages 575–588. PMLR, 2021.

Boling Yang, Liyuan Zheng, Lillian J Ratliff, Byron Boots, and Joshua R Smith. Stackelberg MADDPG: Learning emergent behaviors via information asymmetry in competitive games. 2022.

Yaolong Yu, Haifeng Xu, and Haipeng Chen. Learning correlated Stackelberg equilibrium in general-sum multi-leader-single-follower games. arXiv preprint arXiv:2210.12470, 2022.
Kaiqing Zhang, Zhuoran Yang, and Tamer Başar. Multi-agent reinforcement learning: A selective overview of theories and algorithms. Handbook of Reinforcement Learning and Control, pages 321–384, 2021.

Han Zhong, Zhuoran Yang, Zhaoran Wang, and Michael I Jordan. Can reinforcement learning find Stackelberg-Nash equilibria in general-sum Markov games with myopic followers? arXiv preprint arXiv:2112.13521, 2021.

Banghua Zhu, Stephen Bates, Zhuoran Yang, Yixin Wang, Jiantao Jiao, and Michael I Jordan. The sample complexity of online contract design. arXiv preprint arXiv:2211.05732, 2022.
A Proofs in Section 3

A.1 Proof of Theorem 3.1

Proof. Consider Example 3.2. The expected reward is given by
$$h_\theta(a, b) := \theta \cdot \varphi(a, b) = (1 - b)\,\theta_{-d} \cdot a + b(1 - \Delta). \quad (27)$$
Optimizing over $b \in [0, 1]$ yields
$$h_\theta(a) = \max\{1 - \Delta,\ \theta_{-d} \cdot a\}. \quad (28)$$
Note that for any $a \in \mathcal{A}$ such that $\theta_{-d} \cdot a < 1 - \Delta$, the best response of the follower is $b = 1$, yielding an expected reward of $1 - \Delta$; for any $a \in \mathcal{A}$ such that $\theta_{-d} \cdot a \ge 1 - \Delta$, the best response of the follower is $b = 0$, yielding an expected reward of $\theta_{-d} \cdot a$. The optimal joint response $a = \theta_{-d}$ and $b = 0$ achieves the optimal expected reward $\|\theta_{-d}\| = 1 > 1 - \Delta$. From the leader's perspective, the problem thus reduces to the ReLU bandit considered in Dong et al. [2021], since the follower's response provides no information until the average regret falls below $\Delta$. Thus we have
$$\inf_{\hat\pi} \sup_{\theta \in \Theta} R(T) \ge \Omega\big(T^{1 - \frac{1}{d-2}}\big). \qquad \square$$
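As a quick numerical illustration of Equations (27) and (28), the following minimal sketch computes the follower's best response and the induced leader reward. The helper names (`follower_best_response`, `leader_reward`) and the concrete vectors are ours, not from the paper:

```python
import numpy as np

def follower_best_response(theta_minus_d, a, Delta):
    # h_theta(a, b) = (1 - b) * (theta_{-d} . a) + b * (1 - Delta) is linear
    # in b, so the optimum over b in [0, 1] sits at an endpoint.
    return 0.0 if float(np.dot(theta_minus_d, a)) >= 1.0 - Delta else 1.0

def leader_reward(theta_minus_d, a, Delta):
    # Induced reward h_theta(a) = max{1 - Delta, theta_{-d} . a}, as in Eq. (28).
    b = follower_best_response(theta_minus_d, a, Delta)
    return (1.0 - b) * float(np.dot(theta_minus_d, a)) + b * (1.0 - Delta)

theta = np.array([0.6, 0.8])   # unit vector playing the role of theta_{-d}
Delta = 0.1
print(leader_reward(theta, theta, Delta))                  # optimal action: reward 1
print(leader_reward(theta, np.array([0.8, -0.6]), Delta))  # orthogonal action: reward 1 - Delta
```

Until an action with $\theta_{-d} \cdot a \ge 1 - \Delta$ is found, every action returns exactly $1 - \Delta$, which is why the reduction to the ReLU bandit applies.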
A.2 Proof of Theorem 3.3

Proof. Let $\mathcal{H}(\epsilon)$ be a minimal $\epsilon$-covering of $\mathcal{H}$ under the metric $\|\cdot\|_\infty$. Let
$$\mathcal{A}(\epsilon) = \Big\{ \arg\max_{a \in \mathcal{A}} \max_{b \in \mathcal{B}} h(a, b) \;\Big|\; h \in \mathcal{H}(\epsilon) \Big\},$$
where we break ties arbitrarily when the optimal action is non-unique. Note that $|\mathcal{A}(\epsilon)| \le |\mathcal{H}(\epsilon)| \le N(\epsilon)$. Let $h^\star$ be the true reward function. By the definition of a covering, there exists some $h_\epsilon \in \mathcal{H}(\epsilon)$ such that $\|h^\star - h_\epsilon\|_\infty \le \epsilon$. Thus we have
$$R(T) = \sum_{t=1}^{T} \mathbb{E}\big[h^\star(a^*) - h^\star(a_t)\big] \le \epsilon T + \sum_{t=1}^{T} \mathbb{E}\big[h_\epsilon(a^*) - h_\epsilon(a_t)\big].$$
We know that the optimal action for $h_\epsilon$ must lie in the set $\mathcal{A}(\epsilon)$. Thus any worst-case optimal no-regret algorithm on the set $\mathcal{A}(\epsilon)$ gives a regret of $\sqrt{|\mathcal{A}(\epsilon)| T} \le \sqrt{N(\epsilon) T}$. This gives
$$R(T) \le \epsilon T + \sqrt{N(\epsilon) T}.$$
Taking the infimum over $\epsilon$ finishes the proof. $\square$
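To see how the trade-off over $\epsilon$ in the bound $R(T) \le \epsilon T + \sqrt{N(\epsilon) T}$ plays out numerically, here is a small sketch. The parametric form $N(\epsilon) = (C/\epsilon)^d$ is an illustrative assumption (a common shape for covering numbers), not something the theorem asserts:

```python
import math

def regret_bound(eps, T, d, C=1.0):
    # R(T) <= eps * T + sqrt(N(eps) * T), with the assumed form N(eps) = (C / eps)^d.
    return eps * T + math.sqrt((C / eps) ** d * T)

def best_eps(T, d, C=1.0, grid=4000):
    # Minimize the bound over a log-spaced grid of eps values in [1e-4, 1].
    cands = (10 ** (-4 + 4 * i / grid) for i in range(grid + 1))
    return min(cands, key=lambda e: regret_bound(e, T, d, C))

T, d = 10**6, 2
eps_star = best_eps(T, d)
# Balancing the two terms gives eps ~ T^(-1/(d+2)) and R(T) = O(T^((d+1)/(d+2))),
# i.e. T^(3/4) for d = 2: sublinear, but degrading with dimension.
print(eps_star, regret_bound(eps_star, T, d))
```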
B Proofs in Section 4

B.1 Proof of Lemma 4.5

Proof. Recall the notation from Example 4.4: let $\theta_t^{(b)} = \Pi_{\mathcal{A}}(\hat\theta_t)$ for $t \ge 2$, with $\hat\theta_t := \frac{1}{t-1} \sum_{i=1}^{t-1} \hat b_i$. The first round incurs at most a constant regret and can be ignored. It suffices to show that, with probability at least $1 - \delta$,
$$\|\theta - \theta_t^{(b)}\| \le \frac{\alpha_t}{\sqrt{t}} \quad (29)$$
for $\alpha_t = \Theta\big( \sigma_b \sqrt{d + \log\frac{T}{\delta}} \big)$. First, we bound the distance between $\hat\theta_t$ and $\theta$. By our assumption,
$$\|\hat\theta_t - \theta\| = \Big\| \frac{1}{t-1} \sum_{i=1}^{t-1} w_i \Big\|,$$
where $w_1, \ldots, w_{t-1}$ are i.i.d. zero-mean $\sigma_b$-sub-Gaussian.
We proceed using a covering argument. Construct $U \subseteq S^{d-1}$ such that
$$\inf_{v \in S^{d-1}} \sup_{u \in U} u \cdot v \ge \frac{1}{2}. \quad (30)$$
Note that $\|u - v\| = \sqrt{2 - 2 u \cdot v}$ for $u, v \in S^{d-1}$. Hence, equivalently, we may choose $U$ as a minimal $1$-covering of $S^{d-1}$ in the Euclidean metric. Then
$$\log |U| \le \log N^{\mathrm{int}}(S^{d-1}, 1, \|\cdot\|) \le \log M(B^d, 1, \|\cdot\|) = \Theta(d), \quad (31)$$
where $N^{\mathrm{int}}$ and $M$ denote the internal covering number and the packing number of the space under a given metric. The choice of $U$ ensures that
$$\|w\| \le 2 \sup_{u \in U} u \cdot w \quad (32)$$
for all $w \in \mathbb{R}^d$, and ignoring the constant factor, we may focus on upper bounding $\sup_{u \in U} \sum_{i=1}^{t-1} u \cdot w_i$. For each choice of $u \in U$, let $Z_{u,i} = u \cdot w_i$, so that $Z_{u,1}, \ldots, Z_{u,t-1}$ are i.i.d. zero-mean $\sigma_b$-sub-Gaussian by the definition of sub-Gaussian random vectors. By Hoeffding's inequality for sub-Gaussian random variables, we have
$$\mathbb{P}\Big( \sum_{i=1}^{t} Z_{u,i} > x \Big) \le \exp\Big( -\frac{x^2}{2 t \sigma_b^2} \Big) \quad (33)$$
for all $x > 0$. Applying a union bound over $U$ and using (32) gives
$$\mathbb{P}\Big( \Big\| \sum_{i=1}^{t} w_i \Big\| \ge 2x \Big) \le \mathbb{P}\Big( \sup_{u \in U} \sum_{i=1}^{t} Z_{u,i} \ge x \Big) \le |U| \exp\Big( -\frac{x^2}{2 t \sigma_b^2} \Big). \quad (34)$$
Choosing $x = \sigma_b \sqrt{2 t \log(|U| T)} \lesssim \sigma_b \sqrt{t \big( d + \log\frac{T}{\delta} \big)}$ ensures that, by another union bound over $t \in [T]$,
$$\|\hat\theta_t - \theta\| \lesssim \sigma_b \sqrt{t^{-1} \Big( d + \log\frac{T}{\delta} \Big)} \quad (35)$$
with probability at least $1 - \delta$. By the triangle inequality and the definition of the projection,
$$\|\theta_t^{(b)} - \theta\| \le \|\theta_t^{(b)} - \hat\theta_t\| + \|\hat\theta_t - \theta\| \le 2 \|\hat\theta_t - \theta\| \lesssim \sigma_b \sqrt{t^{-1} \Big( d + \log\frac{T}{\delta} \Big)} \quad (36)$$
with the same probability. This gives (29) and completes the proof. $\square$
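A quick Monte Carlo sanity check of the rate in (35), using Gaussian noise as a stand-in for the $\sigma_b$-sub-Gaussian $w_i$ (an assumption for illustration; the dimension and sample size are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
d, t, sigma_b = 8, 4000, 1.0

# w_i i.i.d. N(0, sigma_b^2 I_d): zero-mean and sigma_b-sub-Gaussian.
w = rng.normal(0.0, sigma_b, size=(t, d))

# ||theta_hat - theta|| equals the norm of the averaged noise.
err = float(np.linalg.norm(w.mean(axis=0)))

# The proof gives err <~ sigma_b * sqrt(d / t) with high probability
# (suppressing the log(T / delta) factor).
print(err, sigma_b * np.sqrt(d / t))
```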
B.2 Proof of Proposition 4.6

Proof. We condition on the validity of the confidence sets, which holds with probability at least $1 - \delta$ by our choice of $\{\alpha_t\}_{t \in [T]}$. UCB always chooses $a_t$ in the confidence set $\Theta_t$, whose radius is of order $O\big( \sigma_b \sqrt{t^{-1}(d + \log\frac{T}{\delta})} \big)$. When $\theta^\star \in \Theta_t$, we have $\|a_t - \theta^\star\| \lesssim \sigma_b \sqrt{t^{-1}(d + \log\frac{T}{\delta})}$. Since both $a_t$ and $\theta^\star$ are unit vectors, we have
$$R_{\mathrm{UCB}}(T) \le 2\delta T + \sum_{t=1}^{T} \big( 1 - \theta^\star \cdot a_t \big) = 2\delta T + 2 + \frac{1}{2} \sum_{t=2}^{T} \|\theta^\star - a_t\|^2 \lesssim 2\delta T + \sum_{t=2}^{T} \frac{\sigma_b^2}{t} \Big( d + \log\frac{T}{\delta} \Big) = O\Big( \delta T + \sigma_b^2 \log T \cdot \Big( d + \log\frac{T}{\delta} \Big) \Big),$$
where the term $2\delta T$ bounds the contribution of the event that the confidence sets are not all valid. Choosing $\delta = 1/T$ gives the desired bound. $\square$
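The display above rests on two elementary facts: for unit vectors, $1 - u \cdot v = \frac{1}{2}\|u - v\|^2$ (converting instantaneous regret into squared estimation error), and $\sum_{t=2}^{T} 1/t \le \log T$ (the source of the $\log T$ factor). Both are easy to check numerically; the vectors and horizon below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fact 1: for unit vectors u, v, we have 1 - u . v = ||u - v||^2 / 2,
# since ||u - v||^2 = 2 - 2 u . v.
u = rng.normal(size=5); u /= np.linalg.norm(u)
v = rng.normal(size=5); v /= np.linalg.norm(v)
assert abs((1.0 - u @ v) - 0.5 * np.linalg.norm(u - v) ** 2) < 1e-12

# Fact 2: the harmonic tail sum_{t=2}^T 1/t is at most log T.
T = 1000
harmonic_tail = sum(1.0 / t for t in range(2, T + 1))
print(harmonic_tail, np.log(T))
```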
B.3 Proof of Proposition 4.10

Proof. After the first round, the leader's task reduces to a linear bandit with action space $\mathcal{A}_1$: only actions within $\mathcal{A}_1$ will be played, and the reward is linear in this region. As is well known for linear bandits (e.g., Russo and Van Roy [2013]), with probability $1 - \delta$, the regret in this linear stage (i.e., excluding the first round) is upper bounded by
$$2\delta T + O\Big( \sqrt{d \log T \cdot (d \log T + \log \delta^{-1}) \cdot T} \Big).$$
The first round adds at most a constant to this and can be ignored. Choosing $\delta = T^{-1}$, we have
$$R_{\mathrm{UCB}}(T) \le \widetilde{O}(d \sqrt{T}). \quad (37)$$
$\square$
+page_content=' (37) B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='4 Proof of Proposition 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='11 Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Let Θ1 = {θa ∈ Sd−1|θa · b1 ≥ ζ} × {b1}, and denote the true parameter by θ⋆ = (θ⋆ a, θ⋆ b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' By our assumption on the problem structure, we have θ⋆ a ∈ Θ(b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' As in the proof of Theorem 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='3, let Θ(ǫ) be a minimal ǫ-covering of Θ1 in Euclidean metric, with ǫ > 0 to be specified later.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' In particular, there is some ˜θa ∈ Θ1 with ∥˜θa − θ⋆ a∥ ≤ ǫ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Let A(ǫ) = {arg maxa∈A ReLU(θa · a − ∆) | θa ∈ Θ(ǫ)}, where we break tie arbitrarily when the optimal action is non-unique.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Note that |A(ǫ)| ≤ |Θ(ǫ)| = N(Θ1, ǫ, ∥ · ∥).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Now, let the leader play UCB on the discrete action set A(ǫ) after the first round.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' The regret satisfies R(T) ≤ 1 + T � t=2 E � h ⋆(a∗) − h ⋆(at) � ≤ 1 + T · E � h ⋆(a∗) − h ⋆(˜a∗) � + T � t=1 E � h ⋆(˜a∗) − h ⋆(at) � , (38) where a∗ = θ⋆ a and ˜a∗ ∈ arg maxa∈A(ǫ) h ⋆(a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Note that h ⋆(˜a∗) ≥ h ⋆(˜θa) ≥ h ⋆(a∗) − ǫ by our choice of ˜θa and A(ǫ), the second term in (38) is at most ǫT.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' The third term, the regret of UCB on A(ǫ), is bounded by O( � N(Θ1, ǫ, ∥ · ∥) · T) in expectation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' It remains to bound N(Θ1, ǫ, ∥ · ∥).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Note that for any θa, θ′ a ∈ Θ1, we have θa · θ′ a = (θa · b1)(θ′ a · b1) + (θa − (θa · b1)b1) · (θ′ a − (θ′ a · b1)b1) ≥ ζ2 − ∥θa − (θa · b1)b1∥∥θ′ a − (θ′ a · b1)b1∥ ≥ ζ2 − (1 − ζ2) = 2ζ2 − 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Equivalently, ∥θa − θ′ a∥ = � 2 − 2θa · θ′a ≤ 2 � 1 − ζ2 = 2Cζ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Thus, the covering number of Θ1 is upper bounded by � KCζ ǫ d) for some absolute constant K, which yields a regret bound 18 of 1 + ǫT + O( � KdCd ζ T/ǫd).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Choosing ǫ ≍ (KCζ) d d+2T − 1 d+2 reduces this upper bound to O � C d d+2 ζ T d+1 d+2 � as desired.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' C Proofs in Section 5 C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='1 Proof of Proposition 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='2 Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Let the leader run the phased elimination algorithm Huang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' [2021, Algorithm 6] using the response b∗ θ(at) as the proxy reward to maximize.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' This proxy reward, in expectation, is a homogeneous polynomial of degree 2k − 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' By Corollary 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='16 in Huang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' [2021], the algorithm achieves �R(T) ≤ �O �√ d2k−1T � , (39) where �R(T) = �T t=1 1 − b∗ θ(at) is the proxy regret measured based on the the proxy reward (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=', absolute response).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Note that the reward is maximized exactly when the proxy reward is maximized.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' Thus, the Lipschitz property (19) suggests that R(T) ≤ 2k 2k − 1 �R(T) ≤ �O( √ d2k−1T).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
+page_content=' (40) 19' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'}
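The covering-plus-UCB strategy used in the proof above can be illustrated with a small simulation. Everything concrete here (the one-dimensional action set [0, 1], the reward h(a) = 1 − (a − 0.4)², the noise level, and the horizon) is invented for illustration; only the structure — discretise the continuous action set by an ǫ-cover, then run UCB1 on the finite cover — mirrors the proof.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the proof's strategy: discretise the continuous action set by an
# eps-cover, then run UCB1 on the finite cover. The action set, the reward
# function, and the noise level below are hypothetical choices.
eps = 0.1
cover = np.round(np.arange(0.0, 1.0 + 1e-9, eps), 10)  # eps-cover of [0, 1]
h = lambda a: 1.0 - (a - 0.4) ** 2                     # invented reward

T = 5000
counts = np.zeros(len(cover))
means = np.zeros(len(cover))
for t in range(1, T + 1):
    if t <= len(cover):                  # play each covered action once
        i = t - 1
    else:                                # UCB1 index over the finite cover
        i = int(np.argmax(means + np.sqrt(2.0 * np.log(t) / counts)))
    reward = h(cover[i]) + 0.05 * rng.standard_normal()
    counts[i] += 1
    means[i] += (reward - means[i]) / counts[i]

# The cover loses at most O(eps) reward relative to the continuum optimum,
# matching the eps*T term in the regret decomposition (38).
best_covered = cover[int(np.argmax(h(cover)))]
```

Over the horizon, the most-played action in the cover concentrates near `best_covered`, while the discretisation contributes only the ǫT approximation term.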
diff --git a/itFKT4oBgHgl3EQfvy5I/content/2301.11896v1.pdf b/itFKT4oBgHgl3EQfvy5I/content/2301.11896v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..71dbd6b7944926f2ba83f0193a4ac94cd26eaa46
--- /dev/null
+++ b/itFKT4oBgHgl3EQfvy5I/content/2301.11896v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:acd2906e7fcc756e85ba6d9dad6c924612afb42ba51a7a23080ad73694ca6128
+size 886881
diff --git a/itFKT4oBgHgl3EQfvy5I/vector_store/index.faiss b/itFKT4oBgHgl3EQfvy5I/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..3aca23907af9ce27f23c4d5d544ba7d832a007ea
--- /dev/null
+++ b/itFKT4oBgHgl3EQfvy5I/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec9d71f9b087c06e75a420ebe3db000ef4c92a54102a2360a0375bc5ac8a2ee6
+size 3604525
diff --git a/j9AyT4oBgHgl3EQfyPkN/content/2301.00679v1.pdf b/j9AyT4oBgHgl3EQfyPkN/content/2301.00679v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0d96e778b30f8308a7e655a81bd7cdde90f74d3a
--- /dev/null
+++ b/j9AyT4oBgHgl3EQfyPkN/content/2301.00679v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:803b66bb29a9d1b4d273bd6d798ecaeb2f35ea06c58d73a89efe9e34e1e7b136
+size 154990
diff --git a/j9AyT4oBgHgl3EQfyPkN/vector_store/index.faiss b/j9AyT4oBgHgl3EQfyPkN/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..be99b86f8b2ea415712adbcc5b0734ef62800758
--- /dev/null
+++ b/j9AyT4oBgHgl3EQfyPkN/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43c43e6cceab14c15b454dbaf53512c43d4d54698091c0a9a76d0e5e80a8c7dd
+size 1048621
diff --git a/j9AyT4oBgHgl3EQfyPkN/vector_store/index.pkl b/j9AyT4oBgHgl3EQfyPkN/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..07121638590d7acc6e67a6099b6ed8871419cc68
--- /dev/null
+++ b/j9AyT4oBgHgl3EQfyPkN/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:34ccf52bfec96c6e15fd06eb3755da4ef473928664039d58a2ce16fb399d6163
+size 40817
diff --git a/j9FQT4oBgHgl3EQfmDaC/content/2301.13364v1.pdf b/j9FQT4oBgHgl3EQfmDaC/content/2301.13364v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f444c0a8eb571b41abe7e69393cc8b14a07d8ecf
--- /dev/null
+++ b/j9FQT4oBgHgl3EQfmDaC/content/2301.13364v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1e1315be3a3ca4fad6f8383965c2767c975a3ecca7b861b5af2860a0e657c5d
+size 930892
diff --git a/j9FQT4oBgHgl3EQfmDaC/vector_store/index.faiss b/j9FQT4oBgHgl3EQfmDaC/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..82052b7922274798ea3c36e901fea0d0d9d3bda1
--- /dev/null
+++ b/j9FQT4oBgHgl3EQfmDaC/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab6d14902e4da23f1ccaea09664f61818444a93ac1db02e4485b7c9b6a650b22
+size 3735597
diff --git a/j9FQT4oBgHgl3EQfmDaC/vector_store/index.pkl b/j9FQT4oBgHgl3EQfmDaC/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..fd9ea55556ab82b3cf15cca809133345acef6fbf
--- /dev/null
+++ b/j9FQT4oBgHgl3EQfmDaC/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c5a1e77566f9e7d6fff2e0a06b58e2dc996c35e8cb6e706c308336ed90d375a
+size 149226
diff --git a/kdFQT4oBgHgl3EQfmjaf/content/2301.13366v1.pdf b/kdFQT4oBgHgl3EQfmjaf/content/2301.13366v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..225eb6920f88ea7e21a280785ebe3c9866aa1260
--- /dev/null
+++ b/kdFQT4oBgHgl3EQfmjaf/content/2301.13366v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1edddae4983d066a283ad0f9a299e57258491cf810bb76a1c18015ad69eb945
+size 1128650
diff --git a/kdFQT4oBgHgl3EQfmjaf/vector_store/index.faiss b/kdFQT4oBgHgl3EQfmjaf/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..6cd462cffb5f841b09b8c373c620e3a440c918de
--- /dev/null
+++ b/kdFQT4oBgHgl3EQfmjaf/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:52190547b3f514076418c0f248e3ee1d4f090b2be9bd44b03ea78138df3b2866
+size 5111853
diff --git a/kdFQT4oBgHgl3EQfmjaf/vector_store/index.pkl b/kdFQT4oBgHgl3EQfmjaf/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..07ad7766a73e1c6cd8c224de330847e0c33e031c
--- /dev/null
+++ b/kdFQT4oBgHgl3EQfmjaf/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7889551214ef1007d9d3f3c7fcd23cf4e7941dc5c8c98b79ceb99b260a2b4322
+size 159555
diff --git a/lNFPT4oBgHgl3EQf2zXD/content/2301.13188v1.pdf b/lNFPT4oBgHgl3EQf2zXD/content/2301.13188v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..cad2779f13fdc765eff5fa8345558fe7621b4e69
--- /dev/null
+++ b/lNFPT4oBgHgl3EQf2zXD/content/2301.13188v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:10e53f4e6284b0653b85fff9426898d9799caccaf5379b24f9f0de00719caacf
+size 9410277
diff --git a/m9E1T4oBgHgl3EQf1QUs/content/2301.03465v1.pdf b/m9E1T4oBgHgl3EQf1QUs/content/2301.03465v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..86933dc8d98c5e098deda4cd7624b463b49782fe
--- /dev/null
+++ b/m9E1T4oBgHgl3EQf1QUs/content/2301.03465v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de62c26ab227069b0cdcd3461633e81b66cd1ab0cb6bd222104d516faac7099f
+size 3524269
diff --git a/n9E0T4oBgHgl3EQfZgDd/vector_store/index.pkl b/n9E0T4oBgHgl3EQfZgDd/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..dc2552fe00e0438049b7a608a4555ca1dbd13876
--- /dev/null
+++ b/n9E0T4oBgHgl3EQfZgDd/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e74707412d91b38956233795034740640274d368dad30cc143de8761a43a74a
+size 247084
diff --git a/n9E3T4oBgHgl3EQfLAl3/vector_store/index.faiss b/n9E3T4oBgHgl3EQfLAl3/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..66c7ea250fc85f8b3cb70e5397918ac1c0eaefde
--- /dev/null
+++ b/n9E3T4oBgHgl3EQfLAl3/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e14945d05ebd62cf59061568964ad20a4a42513b877d9117bd5f2197ebb63977
+size 1376301
diff --git a/n9E_T4oBgHgl3EQf8Byb/content/tmp_files/2301.08373v1.pdf.txt b/n9E_T4oBgHgl3EQf8Byb/content/tmp_files/2301.08373v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d3d8a6ec51118dc6ecd3dc5aa1711363840178cd
--- /dev/null
+++ b/n9E_T4oBgHgl3EQf8Byb/content/tmp_files/2301.08373v1.pdf.txt
@@ -0,0 +1,1353 @@
+Turing pattern or system heterogeneity? A
+numerical continuation approach to assessing the
+role of Turing instabilities in heterogeneous
+reaction-diffusion systems
+Jacob C. Vandenberg∗
+Mark B. Flegg†
+January 23, 2023
+Abstract
+Turing patterns in reaction-diffusion (RD) systems have classically
+been studied only in RD systems which do not explicitly depend on in-
+dependent variables such as space. In practice, many systems for which
+Turing patterning is important are not homogeneous with ideal boundary
+conditions. In heterogeneous systems with stable steady states, the steady
+states are also necessarily heterogeneous which is problematic for applying
+the classical analysis. Whilst there has been some work done to extend
+Turing analysis to some heterogeneous systems, for many systems it is still
+difficult to determine if a stable patterned state is driven purely by sys-
+tem heterogeneity or if a Turing instability is playing a role. In this work,
+we try to define a framework which uses numerical continuation to map
+heterogeneous RD systems onto a sensible nearby homogeneous system.
+This framework may be used for discussing the role of Turing instabili-
+ties in establishing patterns in heterogeneous RD systems. We study the
+Schnakenberg and Gierer-Meinhardt models with spatially heterogeneous
+production as test problems. It is shown that for sufficiently large system
+heterogeneity (large amplitude spatial variations in morphogen produc-
+tion) it is possible that Turing-patterned and base states become coinci-
+dent and therefore impossible to distinguish. Other exotic behaviour is
+also shown to be possible. We also study a novel scenario in which mor-
+phogen is produced locally at levels that could support Turing patterning
+but on intervals/patches which are on the scale of classical critical do-
+main lengths. Without classical domain boundaries, Turing patterns are
+allowed to bleed through; an effect noted by other authors. In this case,
+this phenomena effectively changes the critical domain length. Indeed, we
+even note that this phenomena may also effectively couple local patches
+together and drive instability in this way.
+∗School of Mathematical Sciences, Monash University, Clayton, Victoria 3800, Australia.
+†School of Mathematical Sciences, Monash University, Clayton, Victoria 3800, Australia.
+1
+arXiv:2301.08373v1 [math.AP] 20 Jan 2023
+
+1
+Introduction
+The reaction-diffusion (RD) equation is a nonlinear partial differential equation
+which exhibits extraordinary diverse behavior observed particularly in the life
+sciences [13, 12, 10]. It models the concentration of different species in time as
+they interact whilst diffusing in space relative to each other. The species of the
+system could refer to a chemical species, biological species or ecological species,
+amongst other possibilities [8].
+Under certain conditions, solutions to the RD equation can have an insta-
+bility which is “driven by diffusion”. This is called a Turing instability, which is
+usually defined as follows. Turing instabilities occur when an RD system has a
+spatially-uniform steady state which is unstable in the presence of diffusion, but
+stable in the absence of diffusion. Alan Turing’s seminal paper analyses Turing
+instabilities as a mechanism for explaining the emergence of spatial heterogene-
+ity in diffuse biological chemical systems [14]. The reason Turing instabilities
+can explain this onset of heterogeneity is because they typically produce Turing
+patterns. Turing patterns are stable solutions to the RD equation which have
+large spatial oscillations, and are stationary in time. Usually diffusion has the
+effect of “flattening” the solution. In this case, however, diffusion is what causes
+the system to deviate away from uniformity.
+Often, RD models are spatially homogeneous in the sense that the RD PDE
+does not explicitly contain the spatial variable x (or t). Typically, RD models
+which exhibit Turing patterning are studied as homogeneous systems to sim-
+plify the analysis of the PDE (finding steady states, performing linear stability
+analysis, demonstrating the potential for patterning etc.). At the same time,
+most real world applications almost certainly contain spatial variation in model
+parameters. Consider, for example, the patterning and development of digits,
+kidneys and lungs where homogeneous models are analysed for the presence of
+Turing instabilities despite there being obvious spatial heterogeneity in mor-
+phogen production rates [7, 11].
+Understanding Turing patterning in the presence of spatially heterogeneous
+RD PDEs is not well understood and surprisingly has received very little at-
+tention in the literature. Perhaps one of the reasons for this is that Turing
+analysis of spatially heterogeneous RD PDEs is challenging, as it is not even
+apparent how Turing instabilities should be defined. To begin,
+the unstable uniform steady state required for defining the Turing instability
+does not exist by definition for spatially heterogeneous RD PDEs.
+The analysis by Krause et al. presents a general stability theory for a hetero-
+geneous RD PDE. Their analysis is, however, limited to cases where heterogeneity
+varies slowly almost everywhere relative to the domain size [6]. In the paper,
+Krause et al. define a ‘base state’ solution which replaces the notion of the uni-
+form steady state which has been ‘flattened’ by diffusion. The base state, which
+must be a stationary solution to the PDE, has certain properties. Importantly,
+the base state does not have spatial oscillations with periods much smaller than
+the inhomogeneity in the PDE (it is nice and ‘diffused’). Aside from this defi-
+nition being vague, it is not clear that it should be the case if the PDE contains
+
+heterogeneities which vary on the same spatial scale as the Turing patterns for
+the system. This is because it is not easy to distinguish between patterned and
+base states if oscillations in the patterned state are on the same spatial scale as
+the base state. We shall also be adopting the term ‘base state’ but attempting
+to find a more general approach to finding it.
+Another method which has been widely used in the literature is to limit the
+scope of the study to more specific examples. This includes choosing specific
+reaction terms such that an exact solution can be computed [9, 1]. At this point,
+a stability analysis similar to the classical analysis can be performed. Using a
+linear reaction term is common [9, 5], but nonlinear reaction terms can also be
+considered [1]. Truncated Galerkin expansions of the solution have been used
+to study the stability of heterogeneous problems [4, 5]. These too use specific
+examples to find base states analytically.
+No insight is given as to why the
+solutions that were found should be analogous to the uniform base state in the
+homogeneous case.
+In this manuscript, our aim is to investigate a method which may be used
+to find base states for heterogeneous reaction-diffusion PDEs. The stability of
+these base states may be used to define Turing patterns. We propose a method
+for describing base states and apply this method to the canonical Schnaken-
+berg (substrate depletion) system as well as the Gierer-Meinhardt (activator-
+inhibitor) system. In both of these systems we allow the production of species
+to vary in space. We focus on two main curiosities. The first deals with critical
+phenomena which place limitations on when a base state may be defined and
+the second deals with the onset of critical domain lengths for Turing instabilities
+in the presence of heterogeneous production.
+2
+Methods
+The classical spatially-homogeneous dimensionless reaction-diffusion system is
+∂u/∂t = D∇2u + γF(u), on Ω,   (1)
+∇u · n = 0, on ∂Ω.   (2)
+Here u is a vector containing the concentration of model species/chemicals, D
+is a diagonal matrix of diffusion constants (with D11 = 1 providing a charac-
+teristic timescale for nondimensionalisation), and F is a nonlinear vector-valued
+function describing the possible sources and sinks of, and reactions between, the
+species. The domain Ω (which has an outward normal vector n) has been scaled
+through non-dimensionalisation so that the spatial scale of the system relative
+to that of diffusion is described by the magnitude of γ.
+A Turing analysis of this system begins by finding the uniform steady state
+solution u⋆ such that F(u⋆) = 0. Indeed, this uniform state is a solution to the
+model because the derivatives of u⋆ (a constant) are zero. Subsequently, a Turing
+pattern is formed when the solution u⋆, which is stable in the absence of diffusion
+(D = 0), becomes unstable in its presence.
+
+The uniform solution to the model u⋆ will be called the base state and in hetero-
+geneous problems loses its uniformity. This is the natural, diffusion-flattened,
+state of the system.
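The classical recipe just described — find u⋆ with F(u⋆) = 0, check stability without diffusion, then check for a diffusion-driven instability — can be sketched numerically. The kinetics, parameter values, and the scanned wavenumber range below are illustrative choices, not taken from this paper.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative Schnakenberg-type kinetics (parameters invented for this sketch)
a, b, gamma, d = 0.1, 0.9, 100.0, 10.0
F = lambda w: np.array([a - w[0] + w[0] ** 2 * w[1], b - w[0] ** 2 * w[1]])

ustar = fsolve(F, np.array([1.0, 1.0]))        # uniform steady state, F(u*) = 0

# Jacobian of F at u* by central finite differences
eps = 1e-7
J = np.column_stack([(F(ustar + eps * e) - F(ustar - eps * e)) / (2 * eps)
                     for e in np.eye(2)])

D = np.diag([1.0, d])
# Turing instability: stable with no diffusion, unstable at some wavenumber k
stable_without_diffusion = np.all(np.linalg.eigvals(gamma * J).real < 0)
k2 = np.linspace(0.1, 200.0, 2000)             # scan of squared wavenumbers
growth = [np.linalg.eigvals(gamma * J - kk * D).real.max() for kk in k2]
turing_unstable = stable_without_diffusion and max(growth) > 0
```

For these illustrative parameters the uniform state is stable to spatially uniform perturbations but a band of wavenumbers has positive growth rate, i.e. the classical Turing condition holds.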
+We can extend the RD model to account for explicit spatial variation
+∂u/∂t = div(D(x)∇u) + γF(u, x), on Ω,   (3)
+∇u · n = 0 on ∂Ω.   (4)
+If we were to proceed as before, we can take u⋆(x) which satisfies F(u⋆(x), x) =
+0 for all x ∈ Ω. The diffusion term div(D(x)∇u⋆(x)) is not zero in general,
+which would mean u⋆(x) is not a steady state solution of Equation (3). Thus, it
+does not make sense to analyse its stability. So in order to extend the definition
+of a Turing instability, we need to find a different base state u⋆(x) which satisfies
+the steady state problem for Equations (3) and (4) but also should not be called
+a Turing pattern. Whilst a ‘pattern’ is often defined as any stable stationary
+heterogeneous solution, we reserve the definition of pattern in this manuscript
+to describe any stationary heterogeneous state separate from the base state.
+As it stands, there is no conventional way of finding or defining more gen-
+erally what this base state is. The only thing that can be said about the base
+state u⋆(x) is that it should be somehow sensibly analogous to the uniform base
+state described for the homogeneous system.
+We will narrow the scope of our efforts to investigate this system to the case
+where heterogeneity is in the reaction term only. Specifically, we look at systems
+with heterogeneous production rates of each species as we believe that this
+setting is ubiquitous in biological applications, where morphogen is differentially
+expressed in space but reactions between morphogens are autonomous as one
+might expect. Thus, the form of the RD equation that we will be analysing is as
+follows and splits F up into autonomous, homogeneous ˆF and heterogeneous G
+components. How this partition should be done appropriately and uniquely we
+will discuss here, outlining the approach that we have taken, but we will justify
+this approach in Section 2.1.
+∂u/∂t = D∇2u + γ(ˆF(u) + G(u, x)), on Ω,   (5)
+∇u · n = 0 on ∂Ω.   (6)
+To analyse this system, we will find it useful to ‘grow’ the heterogeneous com-
+ponents by means of a parameter θ by defining the parameterised problem
+∂u/∂t = D∇2u + γ(ˆF(u) + θG(u, x)), on Ω,   (7)
+∇u · n = 0 on ∂Ω.   (8)
+(8)
+Importantly, the parameter θ in these models describe the amplitude of the
+heterogeneity in the system and when θ → 0 a classical system is recovered and
+when θ → 1 the full heterogeneous problem is recovered. Importantly, as θ may
+4
+
+be thought of as the amplitude of the heterogeneity and easily absorbed into G,
+it is possible to also think of θ growing beyond 1 and simply forming part of a
+growing G in Equations (5) and (6).
+Whilst there is freedom in the choice of the partition of F in Equation (3)
+into G and ˆF in Equation (5), we find it appropriate to uniquely define G and
+ˆF for a given F in the following way.
+ˆF = (1/|Ω|) ∫Ω F(u, x) dx,   (9)
+G = F − ˆF.   (10)
+This is a convenient choice when the reaction term can be decomposed into
+a spatially-independent coupling term and a spatially-dependent source term,
+resulting in the following.
+F(u, x) = ˆF(u) + G(x),
+where the average value of G is 0. Furthermore, by using this decomposition for
+F, we ensure that for each θ the parameterised system (Equation (7)) adheres
+to the same decomposition rules whilst at the same time capturing the autonomous
+reactions in F within ˆF; it is often these terms which are the characteristically
+important ingredients in the Turing behaviour of the system (noting that
+F → ˆF as θ → 0).
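As a concrete sketch of the decomposition in Equations (9) and (10), the following computes ˆF as the spatial average of F over Ω and G as the zero-mean remainder. The grid, the representative state (u, v), and the cosine production profile (of the form later used in the case studies) are illustrative choices.

```python
import numpy as np

# Sketch of Equations (9)-(10): Fhat is the spatial average of F over Omega and
# G is the zero-mean remainder. All concrete values here are illustrative.
x = np.linspace(0.0, 1.0, 1001)                       # Omega = (0, 1), |Omega| = 1
hx = x[1] - x[0]

def avg(f):
    # trapezoidal-rule average over Omega (exact enough for this sketch)
    return hx * (f[..., 1:] + f[..., :-1]).sum(axis=-1) / 2.0

beta0, theta, n = 0.5, 1.0, 4
beta = beta0 * (1.0 + theta * np.cos(n * np.pi * x))  # heterogeneous production
eta = 1.0 - beta

def F(u, v):
    # Schnakenberg-style reaction term with heterogeneous production rates
    return np.array([-u * v ** 2 + beta, u * v ** 2 - v + eta])

u = np.full_like(x, 1.0)                              # a representative uniform state
v = np.full_like(x, 0.9)
Fx = F(u, v)

Fhat = avg(Fx)                                        # Equation (9)
G = Fx - Fhat[:, None]                                # Equation (10)
assert np.allclose(avg(G), 0.0, atol=1e-9)            # G has zero spatial mean
```

By construction G integrates to zero over Ω, so the parameterised family in Equation (7) interpolates between the averaged homogeneous system and the full heterogeneous one.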
+2.1
+Base states
+In this section, we attempt to redefine the base state of a heterogeneous reaction-
+diffusion system as a parameterised continuation of a nearby homogeneous sys-
+tem.
+A necessary condition on the base state of a reaction-diffusion system
+(Equations (7) and (8)) is that it must be a stationary solution, against which
+stability can be later checked.
+The base state of Equations (7) and (8) shall be labelled as u⋆θ(x) (and
+sometimes as u⋆θ(x; θ) to highlight dependence on the parameter θ). We have
+that u⋆θ(x) is a solution to
+D∇2u + γ(ˆF(u) + θG(u, x)) = 0 on Ω,   (11)
+∇u · n = 0 on ∂Ω.   (12)
+Since the base state should become the uniform steady state as θ → 0, we have
+that u⋆0 ∈ RNs (where Ns is the number of species in the model) is constant in
+x and ˆF(u⋆0) = 0.
+It makes sense to represent Equations (11) and (12) as the single equation
+Φ(u, x, ¯x; θ) = (D∇2u(x) + γ(ˆF(u(x)) + θG(u(x), x)), ∇u(¯x) · n)⊤ = 0,   (13)
+
+where x ∈ Ω and ¯x ∈ ∂Ω.
+In order to label a solution to Equation (13) as a base state, we will further require
+that it varies continuously with respect to θ. In this way, the base states of
+the system are tied, via continuation of the parameter θ, to the base state u⋆
+0
+(uniform steady state) of an associated homogeneous system (as θ → 0).
+To ensure the existence of u⋆θ for some θ ̸= 0, we can find some η > 0
+and u⋆θ : (−η, η) → C2(Ω, RNs) such that u⋆θ uniquely solves Equation (13) and
+u⋆0 solves ˆF(u⋆0) = 0. The value η provides a region where any −η ≤ θ ≤ η is
+guaranteed to have a base state solution. Outside of (−η, η), the amplitude of the
+heterogeneity may become so large that it is not possible to draw a continuation from u⋆0.
+We define the Jacobian
+¯Jθ(u, x) = ∂Φ/∂u = (Jθ(u, x), n · ∇)⊤   (14)
+= (D∇2 + γ(jˆF(u) + θjG(u, x)), n · ∇)⊤.   (15)
+Here jˆF(u) and jG(u, x) are the Jacobians of ˆF and G respectively. For
+continuity and uniqueness of u⋆θ in θ at θ = 0 by the Implicit Function Theorem
+(IFT) [3], we require ¯J0 to be invertible at u⋆0 and therefore we require that
+jˆF(u⋆0) is nonsingular.
+Singularity in ¯Jθ allows for the possibility that θ may become too large in
+magnitude for a well-defined base state u⋆θ to exist. It is unclear in general
+how large the heterogeneity (θ) can become before the base state either ceases
+to exist or is no longer unique, or indeed whether the base state is bounded in
+this way at all. Defining the base state outside of some potential maximum range
+θ− < θ < θ+ is problematic and in our framework not (yet) possible. The values of θ− and
+θ+ coincide with folds in the solution to Equations (11) and (12) characterised
+by singularities in ¯Jθ− and ¯Jθ+.
+Definition 1 (Spatially-dependent Turing base state). For each u0 ∈ RNs such
+that ˆF(u0) = 0 we define the associated spatially-dependent Turing base state (or
+just base state) for Equation (5) as follows. Suppose there exists u⋆θ(x; θ) ∈ C1(Ω ×
+(0, 1], RNs) which is a steady state solution to Equation (7) for all θ ∈ (0, 1]
+and for which u⋆0(x; 0) = u0. Then u⋆1(x) is a Turing base state of the spatially-
+dependent RD system (Equations (5) and (6)) associated with the uniform base
+state u⋆0(x).
+Defining the base state in this way is a natural extension of the classical
+homogeneous case, since the heterogeneous base state should not deviate too far
+from the uniform one in the situation where the amplitude of the heterogeneity
+in the system is small. In other words, if heterogeneity in the system is small,
+we would expect that the base state should be almost ‘flat’ from diffusion.
+As an important note, we have chosen to define ˆF and G using Equations
+(9) and (10), in doing so we ensure that all autonomous terms in F (for example
+reaction kinetics between species which drive Turing instabilities) are encapsu-
+lated in ˆF. Clearly, it is possible to simply define G = F and ˆF = 0. With
+
+this choice, we immediately see that jˆF(u⋆
+0) is singular and continuation to the
+heterogeneous base state is impossible.
+In the case where ˆF ̸= 0, we have
+¯J0(u⋆0, x) = ∂Φ/∂u |_{u=u⋆0, θ=0} = (D∇2 + γJˆF(u⋆0), n · ∇)⊤.
+We apply this to cj ˆwm, where cj ∈ RNs is the jth eigenvector of Am = −Dk2m +
+γJˆF(u⋆0) and ˆwm is the eigenfunction solving ∇2 ˆwm = −k2m ˆwm on Ω with
+∇ ˆwm · n = 0 on ∂Ω. This gives us the following:
+(D∇2cj ˆwm + γJˆFcj ˆwm, n · ∇cj ˆwm)⊤ = (Amcj ˆwm, 0)⊤ = (λj(Am), 0)⊤ cj ˆwm,
+where λj(Am) is the eigenvalue associated with the eigenvector cj. This eigen-
+value determines the stability of the eigenvector cj ˆwm. So if any eigenvector cj
+has a corresponding λj(Am) = 0, the operator ¯J0 will not be invertible and the
+conditions for the IFT would not be satisfied.
+The continuation of base states from θ = 0 cannot proceed unless G(u⋆0, x)
+is orthogonal to every eigenvector in the null space of the adjoint operator (∂Φ/∂u)∗.
+That is,
+∫Ω G(u⋆0, x)⊤v dx = 0, ∀v ∈ null(D∇2 + γJ⊤ˆF).
+This is a result of Fredholm’s alternative [2]. This solvability condition is not
+guaranteed. So for any chosen parameterisation, there may still be cases where
+continuation is impossible about θ = 0.
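In the scalar caricature where the adjoint null space is spanned by a single Fourier mode cos(mπx) on Ω = (0, 1) with no-flux boundaries, the solvability condition reduces to orthogonality of Fourier modes, which the following sketch checks numerically. The scalar reduction and the choice n = 4 are illustrative assumptions, not the paper's vector-valued setting.

```python
import numpy as np

# On Omega = (0, 1) with no-flux boundaries, the Neumann Laplacian eigenfunctions
# are cos(m*pi*x). In a scalar caricature of the solvability condition, a
# heterogeneity G(x) = cos(n*pi*x) fails the Fredholm condition only against the
# mode m = n; against every other mode the integral vanishes.
x = np.linspace(0.0, 1.0, 100001)
hx = x[1] - x[0]
inner = lambda f, g: hx * ((f * g)[1:] + (f * g)[:-1]).sum() / 2.0  # trapezoid

n = 4
G = np.cos(n * np.pi * x)
for m in range(0, 9):
    v = np.cos(m * np.pi * x)
    ip = inner(G, v)
    if m == n:
        assert abs(ip - 0.5) < 1e-6     # int cos^2(n*pi*x) dx = 1/2
    else:
        assert abs(ip) < 1e-6           # orthogonal: condition satisfied
```

In this caricature, continuation from θ = 0 is obstructed only when the heterogeneity resonates with a critical wavenumber of the linearised operator.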
+We have chosen to multiply the heterogeneity G by a parameter θ. Of course,
+this parameterisation of heterogeneity (θ = 1) from the associated homogeneous
+system (θ = 0) is not unique. In Equations (7) and (8) we increase the size of the
+heterogeneity linearly with the parameter θ. A more general parameterisation
+could be
+∂u/∂t = D∇2u + γˆF(u) + γG(u, x; θ),   (16)
+provided that ˆF(u) + G(u, x; 1) ≡ F(u, x), and G(u, x; 0) ≡ 0.
+The IFT only provides information about the existence and uniqueness of
+the base state solution branch locally. The existence and uniqueness of the base
+state solution at θ = 1 is unknown a priori. In particular, it is unknown whether
+changing the parameterisation of G will lead to a change in the base state or
+the existence of the base state. For this, a global homotopy result would be
+required.
+The analysis by Krause et al.
+gives general stability theory for a large
+perturbation in the limit as γ approaches ∞ [6]. However, little attention is given
+on redefining the base state for the Turing instability. The analysis assumes that
+
+a steady state solution to the full RD equation (Equations (3) and (4)) exists,
+and that this solution has certain properties.
+The first property is that the
+solution does not have spatial oscillations on the scale O(1/ϵ). This is an a
+posteriori assumption, since no method is provided for determining whether
+the base state u⋆(x) has O(1/ϵ) oscillations without first finding u⋆(x). Since
+the heterogeneous RD equation is nonlinear in general, finding such a solution
+is non-trivial. Finally, it is assumed that the solution satisfies the no-flux
+boundary conditions ∂u/∂x = 0 at x = 0, 1.
+2.2
+Case studies
+In our numerical investigation, we focus attention on two popular models; the
+Schnakenberg model and the Gierer-Meinhardt model. In their standard ho-
+mogeneous forms, the Schnakenberg model is widely studied as a substrate de-
+pletion Turing system whilst the Gierer-Meinhardt model is a typical activator-
+inhibitor Turing system. In both of these cases we consider only one-dimensional
+domains Ω ∈ (0, 1) on which to solve the PDEs and on the boundaries each of
+the species have no-flux conditions.
+2.2.1
+Schnakenberg model
+The parameterised heterogeneous Schnakenberg model we will be using is as
+follows.
+∂u/∂t = ∇2u + γ(−uv2 + β(x)),   (17)
+∂v/∂t = d∇2v + γ(uv2 − v + η(x)).   (18)
+Here d represents the relative diffusion of the activator v compared to that of
+the substrate u whilst β and η are spatially dependent production rates. We
+will focus on a particular form of β and η in which we parameterise the scale
+for both the amplitude and frequency of the production heterogeneity
+β(x) = β0 (1 + θ cos(nπx)),   (19)
+η(x) = 1 − β(x).   (20)
+In this way, at each position a combined dimensionless activator/substrate pro-
+duction of 1 is assumed. The parameter 0 ≤ β0 ≤ 1 describes the average pro-
+portion of this production specific to the substrate and the parameter 0 ≤ θ ≤ 1
+describes the degree of redistribution of the relative production into n periods
+of peaks and troughs on the domain Ω.
+
+2.2.2
+Gierer-Meinhardt model
+The parameterised heterogeneous Gierer-Meinhardt model is given as follows.
+∂u/∂t = ∇2u + γ(u2/v − bu + a(x)),   (21)
+∂v/∂t = d∇2v + γ(u2 − v),   (22)
+This model is controlled by the heterogeneous production rate a(x) of the acti-
+vator u. We will use a periodic heterogeneity of the form
+a(x) = a0 (1 + θ cos(nπx)) ,
+where a0 ∈ R is the average production rate.
+2.3
+Numerical methods
To generate numerical results we use the numerical continuation method presented by Uecker [15] to find solutions of Equations (11) and (12); starting at u⋆_0 we find base states for the heterogeneous problem. We begin with the statement that Φ(u, x, x̄; θ) = 0 (u must be a solution to Equations (11) and (12)). Differentiating with respect to θ gives

0 = (∂Φ/∂u)(∂u/∂θ) + ∂Φ/∂θ.
So long as ∂Φ/∂u is nonsingular, ∂u/∂θ can be estimated. As such, the base states (and other steady states of the reaction-diffusion system) can be found by starting at θ = 0 and incrementing θ using a forward Euler approach,

u_{θ+Δθ} = u_θ + (∂u_θ/∂θ) Δθ    (23)
         = u_θ − (∂Φ_θ/∂u)⁻¹ (∂Φ_θ/∂θ) Δθ,    (24)

where subscripts indicate the value of θ. The solution generated by Equation (24) is then corrected to reduce error. This is done by setting u_{θ+Δθ} as the initial seed of a Newton solver for the problem Φ = 0. We did not find it necessary to use more advanced techniques when increasing θ.
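The predictor-corrector idea of Equations (23) and (24) can be sketched on a hypothetical scalar problem Φ(u; θ) = u³ + u − θ; this toy residual is ours purely for illustration, whereas the paper applies the same scheme to the discretised reaction-diffusion system via pde2path:

```python
def phi(u, theta):          # residual Phi(u; theta) = u^3 + u - theta
    return u**3 + u - theta

def dphi_du(u, theta):      # Jacobian dPhi/du
    return 3.0 * u**2 + 1.0

def dphi_dtheta(u, theta):  # dPhi/dtheta
    return -1.0

def continue_in_theta(u0, theta0, theta1, steps=20, newton_iters=10):
    """March from theta0 to theta1: Euler predictor (Eq. (24)) followed by a
    Newton corrector on Phi = 0 at each new value of theta."""
    u, theta = u0, theta0
    dtheta = (theta1 - theta0) / steps
    for _ in range(steps):
        # Predictor: u <- u - (dPhi/du)^-1 (dPhi/dtheta) * dtheta
        u -= dphi_dtheta(u, theta) / dphi_du(u, theta) * dtheta
        theta += dtheta
        # Corrector: Newton iterations seeded with the predicted state
        for _ in range(newton_iters):
            u -= phi(u, theta) / dphi_du(u, theta)
    return u

u_final = continue_in_theta(0.0, 0.0, 1.0)  # traces the root of u^3 + u = theta
```

After the march, `u_final` solves u³ + u = 1 to machine precision, illustrating how the corrector removes the accumulated Euler error at each step.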
It is possible to skip the approximate update of Equation (24) and simply use a nonlinear solver on Φ = 0 in the vicinity of u_θ. This is, however, not a good idea, since it significantly increases the computational time spent in the nonlinear solver and can sometimes even result in the solver finding a different steady state (of which there may be many). In any case, we make use of the pde2path package, which implements this routine.
Finally, pde2path determines stability from the sign of the largest real part of the eigenvalues of the linearisation of the PDE.
In the next section we explore numerical results which give insight into the behaviour of Turing systems with heterogeneous production rates. We first look at the characteristic behaviour of base states (Section 3.1). Noting that base states often terminate at a fold bifurcation for a sufficiently large value of θ, it is clear that for some problems a base state is not defined, under our definition, if the heterogeneity is large enough. We therefore investigate more thoroughly what determines whether a base state exists, that is, how large θ can be before a fold bifurcation is reached (Section 3.2). Lastly, we examine how heterogeneous production can affect the critical domain lengths required for Turing patterning (Section 3.3).
3 Numerical results and discussion
3.1 Continuation of steady states
+The first numerical results illustrate the behaviour of a Schnakenberg Turing sys-
+tem described in Section 2.2.1 as the heterogeneous production term is increased
+in amplitude by tracing the base state and patterned states through numerical
+continuation of the amplitude parameter θ. We will first look at some example
+cases to illustrate the types of branches that can be found. For all the following
+results we will use the following parameters; d = 1/40, β0 = 0.8 and n = 1
+unless otherwise stated. Later we will show results for the Gierer-Meinhardt
+model of Section 2.2.2 where we will use the default parameters d = 20, b = 1
+and a0 = 0.1 unless otherwise stated. When θ = 0 these parameters are known
to give a Turing instability in the base state. The parameter γ, which encodes the domain length amongst other things, will be varied between examples to show how the base state behaves as it varies.
+state solution branches, we will plot the maximum value on the domain of only
+the variable u against the parameter θ. This metric has been chosen arbitrarily
+in order to distinguish between solutions. It is important to remember when
+interpreting these bifurcation plots that the branches are only a projection of
+the infinite dimensional function space onto a single scalar value for plotting
purposes. Importantly, this means that when branches cross at non-smooth intersections, this does not indicate a connection between the branches. Instead, at the point of intersection the branches correspond to completely unrelated functions (other than the fact that they share a common maximal value of u).
In many cases, we observe that the continuation in θ can generate base states indefinitely. We also observe two main bifurcation events on the branch containing the base state. The first of these is a fold at which the base state and the stable patterned state merge. The second is an example of a fold terminating the base state, but where the Turing patterned state never bifurcates from the base state (they are, instead, perfectly disconnected). By 'patterned state' we mean a branch corresponding to a non-homogeneous but stable steady state (indicated in blue in each figure). Finally, we demonstrate some exotic behaviour of the steady states under some
+conditions.
+Base state with no limitation
In the most simple case, starting with u⋆_0 and growing the heterogeneous term by increasing θ in Section 2.2.1, no folds were found when increasing θ from 0 to 1. It is important to note that this does not mean that the base states will extend to arbitrarily large θ. For the Schnakenberg system in Section 2.2.1, we find that this often occurs for large γ, and in Fig. 1 we use the value γ = 900. This corresponds to a very large domain in relation to the expected wavelength of any Turing patterns. Our value of γ corresponds to a value of ϵ ≈ 1.1 × 10⁻³ in the paper by Krause et al. [6]. We find that in this case the base state exists by numerical continuation, and furthermore that it is approximately equal to the steady state obtained when diffusion is neglected. This is to be expected, because it is clear from Equations (11) and (12) that, for large γ, unless θ is on the order of γ, to leading order u⋆_θ solves F(u) + θG(u, x) = 0.
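For the Schnakenberg kinetics this diffusionless, large-γ base state can be written down pointwise: summing the two reaction terms of Equations (17) and (18) with η(x) = 1 − β(x) gives v = 1 and hence u(x) = β(x). A quick sketch of this check (our own illustration, with illustrative parameter values):

```python
import math

def beta(x, beta0=0.8, theta=0.5, n=1):
    return beta0 * (1.0 + theta * math.cos(n * math.pi * x))

def reaction(u, v, x):
    """Schnakenberg reaction terms of Eqs. (17)-(18), with gamma factored out."""
    b, e = beta(x), 1.0 - beta(x)
    return (-u * v**2 + b, u * v**2 - v + e)

# The pointwise solution u(x) = beta(x), v = 1 zeroes both reaction terms,
# which is the leading-order base state for large gamma.
for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    f, g = reaction(beta(x), 1.0, x)
    assert abs(f) < 1e-12 and abs(g) < 1e-12
```

This makes concrete why the continued base state in Fig. 1 tracks the diffusionless steady state so closely when γ is large.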
+Base state fold connected to a patterned state
We observe different behaviour in the base state for γ that is not large. If γ is small, but not so small that Turing patterns are absent in the homogeneous Schnakenberg system (due to the domain size being less than the necessary critical domain length), then we observe a critical fold in the base state solution. In Fig. 2, we use the value γ = 1. When θ = 0, this corresponds to the case where there is just one unstable wavenumber, corresponding to a Turing pattern with just a half period on the full domain. In this case, the branch for a patterned state merges with the branch of the base state, undergoing a fold bifurcation as seen in Fig. 2. This means that the base state becomes closer and closer to a patterned state until both states are indistinguishable from each other at the fold bifurcation.
For heterogeneities with an amplitude θ beyond this fold (shown with a green dot in Fig. 2), we are unable to objectively define a suitable base state, and it therefore becomes ambiguous whether or not a 'Turing' pattern is observed in the solution of the reaction-diffusion problem. Indeed, whilst a steady state solution to the reaction-diffusion equation is expected beyond the fold, we cannot locate this solution by numerical continuation from θ = 0 without significant work. That is, there are other missing branches here, and it remains unclear whether any of these are reasonable candidates to be defined as a 'base state'; further work is needed on this point. Fig. 2 shows the stable patterned state but also an unstable patterned state; for θ = 0 there are at least two patterned states, which appear in the bifurcation diagram as mirrored functions. Interestingly, if the heterogeneity is inverted in sign (θ ∈ [−1, 0]), continuation shows a mirror image of the bifurcation diagram in Fig. 2.
[Figure 1 plot: bifurcation diagram of ∥u∥∞ against θ ∈ [0, 1]; title 'Branch of solutions for γ = 900'; legend: Initial Solution, Unstable Branch, Stable Branch.]
Figure 1: Schnakenberg system bifurcation diagram for growing heterogeneity θ ∈ [0, 1]. Parameters used are characteristic of large domains relative to the Turing pattern wavelength (γ = 900), with β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆_θ are displayed. In this case, the base state is allowed to grow continuously without a fold. On the other hand, a (blue) stable Turing 'patterned' state branch is also shown with some displayed distributions of u. This is found by solving the full reaction-diffusion equation at θ = 0 and applying the numerical continuation.
[Figure 2 plot: bifurcation diagram of ∥u∥∞ against θ ∈ [0, 0.10]; title 'Branch of solutions for γ = 1'; legend: Initial Solution, Fold, Unstable Branch, Stable Branch (solid and dot-dash).]
Figure 2: Schnakenberg system bifurcation diagram for growing heterogeneity θ ∈ [0, 1]. Parameters used are characteristic of small domains relative to the Turing pattern wavelength (γ = 1), with β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆_θ are displayed. In this case, the base state merges with the stable patterned state at around θ = 0.09. The blue branches are stable patterned states, but only the solid branch can be obtained by continuing through the fold. The dot-dash branch can be found through continuation of a fold in the base state when decreasing θ from the θ = 0 base state.
+Base state fold not connected to a patterned state
At intermediate values of γ, more curious behaviour is possible. This is in part because these values permit multi-wavelength heterogeneous steady states. In Fig. 3 we now display the bifurcation diagram for γ = 9 (analogous to a three-fold increase of the domain length over the example in Fig. 2). The key observation in Fig. 3 is that whilst the base state branch also undergoes a fold bifurcation, the solution branch with which it merges is an unstable heterogeneous steady state (not a stable pattern). This illustrates that the base state branch can merge with another branch which is not a branch of patterned states. In considering Fig. 1, where the base state seemingly continues indefinitely without folds, it is possible that a fold is present in a similar way to how it appears in Fig. 3, but at sufficiently large values of θ. If this is the case, our observations might suggest that as γ gets very large, so too does the value of θ at which base state folding first occurs.
Exotic behaviour
While the previous examples show two branches originating at θ = 0 converging, this does not capture all possibilities. In a more bizarre scenario, we can consider the case where γ = 3.61. As shown in Fig. 4, the system undergoes many folds before merging with another solution branch which contains θ = 0. Furthermore, there are stable steady states which are only present for a discrete range of θ values. To demonstrate the behaviour and the way the branch closes on itself, it was necessary to continue in both the positive and negative θ directions from u⋆_0.
3.2 Base state existence
+In order to have a discussion about Turing patterns, it is important for a base
+state to exist. It is therefore critical to explore what determines θ+, the maxi-
+mum size that θ can take before a critical point such as a fold is encountered.
+To accomplish this we performed parameter scans on both the Schnakenberg
+and Gierer-Meinhardt model from Sections 2.2.1 and 2.2.2. Our immediate ob-
+servation from doing these scans is that fold bifurcations are very common. In
+particular, we observed more folds when the spatially-dependent source term
+G(u, x) varies explicitly in space with frequencies similar to that of unstable
+eigenvectors in the dispersion relation.
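The connection to the dispersion relation can be made concrete. For the homogeneous (θ = 0) Schnakenberg system, mode m grows at the rate Λm, the largest real part of the eigenvalues of Am = J − (mπ)²D, with J the kinetics Jacobian at the uniform steady state (β0, 1) and D = diag(1, d). The sketch below is our own reconstruction under these standard assumptions (the paper's precise definition of Am is in Section 2.1, not shown in this excerpt); for γ = 1, β0 = 0.8 and d = 1/40 it reproduces the earlier claim that only the m = 1 mode is unstable:

```python
import cmath, math

def Lambda(m, gamma=1.0, beta0=0.8, d=1.0 / 40.0):
    """Largest real part of the eigenvalues of A_m = J - (m*pi)^2 diag(1, d),
    with J the Schnakenberg Jacobian at the uniform steady state (beta0, 1)."""
    # Jacobian of (-u v^2 + beta0, u v^2 - v + eta0) at (u, v) = (beta0, 1):
    fu, fv = -gamma, -2.0 * gamma * beta0
    gu, gv = gamma, gamma * (2.0 * beta0 - 1.0)
    k2 = (m * math.pi) ** 2
    a11, a22 = fu - k2, gv - d * k2
    tr, det = a11 + a22, a11 * a22 - fv * gu
    disc = cmath.sqrt(tr * tr - 4.0 * det)  # 2x2 eigenvalues in closed form
    return max(((tr + disc) / 2.0).real, ((tr - disc) / 2.0).real)

# gamma = 1: the homogeneous mode and m = 2 are stable; only m = 1 grows.
assert Lambda(0) < 0 and Lambda(1) > 0 and Lambda(2) < 0
```

Scanning such Λm over γ is what traces the red Λm = 0 curves discussed below.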
In Fig. 5 we look at θ+ for the Schnakenberg model (a) and the Gierer-Meinhardt model (b). In Fig. 5 (a) we plot θ+ as the scale parameter γ and the parameter β0 in the Schnakenberg model are varied, whilst in (b) we instead vary the parameter a0 in the Gierer-Meinhardt model. In both cases, we have plotted, in red, the curves that relate to the eigenvalue condition Λm = max_j ℜ(λj(Am)) = 0 for m = 1, 2, 3 (for curves left to right). We note that in our test problems we do not have strictly imaginary eigenvalues, so along these curves ¯J0 is singular and we expect that θ+ is not finite. For each constant β0 (or a0) we see that Λm = 0 at most twice, because solving Λm = 0 requires solving a quadratic. Between the
[Figure 3 plot: bifurcation diagram of ∥u∥∞ against θ ∈ [0, 0.14]; title 'Branch of solutions for γ = 9'; legend: Initial Solution, Fold, Unstable Branch, Stable Branch.]
Figure 3: Schnakenberg system bifurcation diagram for growing heterogeneity θ ∈ [0, 1]. Parameters used are characteristic of intermediate domains relative to the Turing pattern wavelength (γ = 9), with β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆_θ are displayed. In this case, the base state merges with an unstable heterogeneous steady state at around θ = 0.12. The blue branch is a stable patterned state, but the dot-dash nature of this branch indicates that it is not obtained by continuation past a fold from the steady state, but instead by solving the reaction-diffusion equation with θ = 0 until steady state and using continuation from there.
[Figure 4 plot: bifurcation diagram of ∥u∥∞ against θ ∈ [−0.08, 0.08]; title 'Branch of solutions for γ = 3.61'; legend: Initial Solution, Fold, Unstable Branch, Stable Branch.]
Figure 4: Schnakenberg system bifurcation diagram for growing heterogeneity θ ∈ [−1, 1]. Parameters used are characteristic of narrowly defined domains relative to the Turing pattern wavelength (γ = 3.61), with β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆_θ are displayed. Note that here the base state would only be defined between approximately −0.05 and 0.05. By continuing through each fold, we end up back at u⋆_0. Interestingly, this closed loop contains three different patterned branches (blue), but no patterned branch on approximately ±(0.03, 0.04). It is expected that the patterned state obtained by solving the reaction-diffusion equation in this regime is not connected here.
[Figure 5 plot: two panels, (a) β0 against √γ and (b) a0 against √γ, coloured on a log scale (10⁻⁸ to 10⁰) by the size of continuation before a fold; red curves mark Λm = 0.]
Figure 5: Size of continuation before a fold, θ+, for (a) the Schnakenberg model and (b) the Gierer-Meinhardt model as γ is varied along with (a) β0 and (b) a0, respectively. The size of the continuation is presented in color on a log scale. All of these results are given for n = 1 in the heterogeneous term in the respective models. Red curves are drawn on the figures to correspond with Λm = max_j ℜ(λj(Am)) = 0 for m = 1, 2, 3 (for curves left to right on both subfigures), where λj(Am) are the eigenvalues defined in Section 2.1. A white background color indicates that no fold was found for these parameter sets and θ was allowed to grow to 1.
[Figure 6 plot: two panels, n against √γ for (a) the Schnakenberg and (b) the Gierer-Meinhardt model, coloured by the size of continuation before a fold; curves mark Λn = 0 (blue and red) and Λ2n = 0 (green dashed); a red × marks a numerical inconsistency.]
Figure 6: Size of continuation before a fold, θ+, for (a) the Schnakenberg model and (b) the Gierer-Meinhardt model as γ and n are varied for each model. The size of the continuation is presented in color. Setting (a) β0 = 0.8 and (b) a0 = 0.1 in each model respectively, the condition Λn = max_j ℜ(λj(An)) = 0, where λj(An) are the eigenvalues defined in Section 2.1, has two solutions. The solution with smallest γ is shown on the blue line and the other is shown on the red line. A white background color indicates that no fold was found for these parameter sets and θ was allowed to grow to 1. In (b) the green dashed line is an overlay of the red line with half of the value of n for each γ. This curve surprisingly traces a pattern of small θ+. In (a) a red × indicates a continuation that runs into numerical difficulties.
two values, we find that Λm > 0 and thus the mth mode of the homogeneous problem is unstable. On these curves, ¯J0 is singular. As previously established, we expect that continuation is not possible on these curves. In the region shown in white, we found no upper bound on θ+. This region also corresponds to the subset of the parameter space where the associated homogeneous system is devoid of Turing patterning. The red curves furthest to the left correspond to m = 1 (corresponding to the onset of Turing instability in the eigenfunction cos(πx) at θ = 0). Note that our growing heterogeneity is also of this form, cos(nπx) with n = 1 (see Equation (19) and the corresponding Gierer-Meinhardt heterogeneity a(x)). We find that because of this a fold is very quick to form in the numerical continuation near the red curve corresponding to m = 1, but not near the onset of instability for the higher modes. Small θ+ is shown by darker colors in the plot.
To investigate specifically whether small θ+ is associated with m = 1 because n = 1, we varied n in the Schnakenberg model from 1 to 10. In Fig. 6, for each n, holding β0 = 0.8 (a) and a0 = 0.1 (b), we plot the size of the continuation θ+ as γ is increased. We indicate the minimum value of γ (blue line) and the maximum value of γ (red line) for which Λn = 0. That is, for n = 1 the blue and red curves correspond to the first and second intersections of β0 = 0.8 (a) and a0 = 0.1 (b) with the respective red curves in Fig. 5. We see for each n that the size of θ+ is very small at both zeros of Λn. What is also surprising is that, if n is larger than 1 and γ is smaller than that required to make the nth mode unstable in the homogeneous problem, the continuation did not fold. That is, we may have a Turing instability in the homogeneous problem because of an instability in the m = 1 mode, but if the heterogeneity has a higher spatial frequency, say n = 2, the base state may not encounter a fold readily. As the scale parameter γ is increased beyond the red line, we find what appears to be noise in θ+, but within this noise there appear to be patterns. Looking specifically at the Gierer-Meinhardt model in Fig. 6 (b), we see small θ+ near the value of the maximum γ for which Λ2n = 0. We have indicated that this is the case by tracing the green dashed line over the expanse of small θ+. This effect can also be seen in Fig. 5 (a) for n = 1 by looking at the left branch of the m = 2 red curve, where there is a noticeable dark shade. As γ increases, the magnitude to which θ can be continued before reaching a fold tends to increase, before no fold is reached at all. However, numerical instabilities are prevalent in this region, as shown specifically by the red × in Fig. 6 (a), so the accuracy of these results remains questionable. We shall look specifically at the continuation described by this red × in the next section. The numerical results seem to become more accurate as the spatial grid becomes finer and the maximum step size in θ becomes smaller; due to the computational cost of producing parameter scans, the accuracy of the results here is limited.
Numerical issues
The inconsistent numerical issue that occurs occasionally in our parameter sweeping experiments of the previous section is investigated here. In particular, we investigate the red × continuation in Fig. 6 (a). In this continuation
[Figure 7 plot: (a) full bifurcation diagram of ∥u1∥∞ against θ ∈ [0, 1] and (b) a zoom near the fold; legend: Small step, Fold point, Other branch, Long Step, Solution branches.]
+Figure 7: Plot of branches for the numerically inconsistent case highlighted in
+Fig. 6 (a) with varying maximum step size. In purple, the base state branch
+and continuation through the fold point (green dot) with very small step sizes
+is shown. In yellow, a different branch is shown and the × symbols show the
+updates in the continuation algorithm if the step size is too coarse. Plot (a)
+shows the full bifurcation diagram whilst plot (b) displays a zoomed version of
+the region enclosed in the red box to show detail near the fold point.
a maximum step size of 10⁻¹ was used. This is a relatively large step size, but since the pde2path package adaptively adjusts the step size as needed, it can usually resolve the finer details without much increase in computational cost. However, in this case, the larger step size causes the solution to jump from one branch to another. This can be seen in the bifurcation diagram of Fig. 7, where for a small step size a fold is encountered early in the continuation, but for a large step size the continuation jumps to a different branch. Clearly the results in this region are unreliable, and it is not clear how small the step size must be made in order to avoid this occurring. It does, however, raise an interesting question. In this example, it is clear that the (yellow) branch that the coarse numerical algorithm found does not technically satisfy the numerical continuation criteria for a base state. That being said, looking at the distributions on either side of the singularity, it is possible that the yellow branch perhaps should be considered a base state. It remains unclear whether such a suitable branch can be found for other cases. However, this case hints at the possibility that there may be a better definition for a base state than the one presented in this manuscript (one which can potentially always describe a unique state for all problems).
3.3 Critical domain length
+The extension of the Turing instability to spatially-dependent RD systems allows
+us to distinguish between patterned states and the base states. Previously these
+solution states were often indistinguishable. This meant that analysing certain
+phenomena, such as the critical domain length, was very challenging or impossi-
+ble. Now that the Turing instability has a spatially-dependent analogue, we can
+study such phenomena. As a proof of concept, we will study how the critical
+domain length changes as the size of the heterogeneity in a spatially-dependent
+RD system increases. The critical domain length has important physical impli-
+cations, especially in developmental scenarios. In a scenario where the domain
+is slowly growing, Turing patterns will arise only if the size of the domain is
+above the critical domain length. Therefore, assessing the impact of a spatially-
+dependent term on the critical domain length could have key implications for
+these developmental scenarios. We will attempt to investigate the change in
+the critical domain length with respect to the size of the heterogeneity for two
+different reaction terms.
+The critical domain length is encoded in a critical γ value which we will
+call γc. Denote γc,0 ∈ R+ as the critical γ value for the classical RD system,
+and γc,θ ∈ R+ as the critical γ value for the heterogeneous RD system with
+parameter θ. Further, define Lc,0 := √γc,0, Lc,θ := √γc,θ as the respective crit-
+ical domain lengths. Here we are accepting Lc = √γc to be a non-dimensional
+equivalent of the critical domain length.
The value of γc,θ is defined such that the base state of Equations (7) and (8) is stable for all γ < γc,θ but exhibits Turing instabilities for some γ > γc,θ. It is infeasible to check all γ values less than some candidate value for γc,θ. Instead, we can rely on the fact that when γ = γc,0 we have Λm = 0, which can be calculated exactly for both the Schnakenberg model and the Gierer-Meinhardt model.
Instead of parameterising the base state branch with the size of the heterogeneity θ only, we will also parameterise with respect to γ. In doing so, we are assuming that a path independence result holds. That is, the base state solution for some γ0 > 0 can be found by first finding the base state solution for another γ1 > 0, and then continuing from that base state solution with respect to γ to find the solution at γ0. Initially we will use γ = γc,0 to perform the continuation, as this is known exactly and we will assume that it is close to γc,θ. After finding a base state solution with the initial γ value, we perform numerical continuation with respect to γ, increasing or decreasing γ until finding γc,θ for the given θ. We reach the critical value γc,θ when the base state (with respect to γ but constant θ) undergoes a change of stability. If the base state found for γ = γc,0 is stable, then we increase γ in the second-stage continuation; likewise, we decrease γ if the base state is unstable. Determining whether a steady state solution is stable can be done using inbuilt methods in pde2path [15].
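The second-stage search for the critical γ can be sketched as a bisection on a stability indicator. The toy indicator below uses the homogeneous (θ = 0) Schnakenberg dispersion relation, for which the m = 1 mode destabilises exactly when det(A1) changes sign; in the paper the indicator would instead be the pde2path stability test applied to the heterogeneous base state. The function names are ours:

```python
import math

def det_A1(gamma, beta0=0.8, d=1.0 / 40.0):
    """Determinant of A_1 = J - pi^2 diag(1, d) for the homogeneous
    Schnakenberg system; the m = 1 mode is unstable when this is negative."""
    fu, fv = -gamma, -2.0 * gamma * beta0
    gu, gv = gamma, gamma * (2.0 * beta0 - 1.0)
    k2 = math.pi ** 2
    return (fu - k2) * (gv - d * k2) - fv * gu

def critical_gamma(lo=0.1, hi=1.0, tol=1e-10):
    """Bisect for the change of stability of the base state as gamma varies."""
    assert det_A1(lo) > 0 > det_A1(hi)  # stable below, unstable above
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if det_A1(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

gamma_c = critical_gamma()  # onset of the m = 1 Turing instability
```

For β0 = 0.8 and d = 1/40 this places the onset of the m = 1 instability near γ ≈ 0.47, consistent with the earlier observation that γ = 1 already has one unstable wavenumber.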
We are relying on using γ = γc,0 as an initial condition for the continuation. However, based on recent analysis of heterogeneous RD systems, even at points where the system with θ = 0 is outside of the Turing region, we may still expect to see Turing instabilities for sufficiently large γ [6]. If the homogeneous system defined by θ = 0 is outside of the Turing region, it is unclear what the initial γ value should be. Further investigation into a method for finding the critical domain length in this case should be considered.
+Fig. 8 shows the critical domain length Lc for the Schnakenberg system for
+a range of θ and β0 values. The length Lc appears to be decreasing with respect
[Figure 8 plot: critical domain length Lc against β0 for θ ∈ {−1/2, −1/3, −1/6, 0, 1/6, 1/3, 1/2}, with a second panel showing the percentage change in Lc.]
+Figure 8: Critical domain lengths Lc,θ of the Schnakenberg system described in
+Section 2.2.1. The critical domain length is plotted for a range of heterogeneity
+sizes θ as a function of the parameter β0.
[Figure 9 plot: critical domain length Lc against a0 for θ ∈ {−1/2, −1/3, −1/6, 0, 1/6, 1/3, 1/2}, with a second panel showing the percentage change in Lc.]
+Figure 9: Critical domain lengths Lc,θ of the Gierer-Meinhardt system described
+in Section 2.2.2. The critical domain length is plotted for a range of heterogene-
+ity sizes θ as a function of the parameter a0.
[Figure 10 plot: production rates β and η against x for (a) β0 = 0.8 and (b) β0 = 0.9, with shaded local Turing regions; title 'Production Rates and local Turing regions'.]
Figure 10: Production rates for the first chemical, u, and the second chemical, v, for the Schnakenberg model of Section 2.2.1. Plots (a) and (b) describe the model with β0 = 0.8 and β0 = 0.9 respectively. Each figure also shows the regions where the system is locally within the classical Turing pattern-generating parameter space. These plots are made for θ = 1/3, meaning that we found a critical domain length for the system shown in (b), but not in (a). In (a), the regions that are driving the Turing instability in the whole domain are further apart, and it is possible that they are effectively decoupled. In this case, we would still expect to find a critical domain length, but a significantly larger one (where Turing patterns can be associated with the sub-domains which locally drive Turing patterns).
to β0 and increasing with respect to θ. On the other hand, Fig. 9 shows that the critical domain lengths for the Gierer-Meinhardt system appear to have the reverse dependence on the parameter a0.
For a given production rate, if the θ = 0 system is within the Turing region, then we expect to have a critical domain length for every other θ value. This is because the cosine heterogeneity will cause at least one interval of the domain to be within the Turing region locally. Thus, for sufficiently large γ, we expect to see Turing patterns [6]. However, our method for finding the critical domain length fails in many of these cases. Most notably, the critical domain length could not be found for any β0 value when θ = 1/2, as seen in Fig. 8. This is
+potentially because there is a decoupling effect between two intervals which are
+locally within the Turing region. Fig. 10 shows the regions where the systems
+with θ = 1/3 and β0 = 0.8, 0.9 are locally within the Turing region. As seen
+in Fig. 8, a critical domain length could be found for β0 = 0.9, but not for
+β0 = 0.8. Although the Turing regions are larger in the case where β0 = 0.8,
+the region between the two Turing regions is also larger. This gap between the
+Turing regions could have a decoupling effect where, if the two regions are close
+enough together, they can act as one region for the purposes of forming a Turing
+instability. That is, there is enough bleed through from one region to the other
+to support a Turing pattern, despite having a region where no Turing pattern
+can be supported in between. So in this case there would be a critical θ value
+after which γ must be significantly larger before observing Turing instabilities
+which are local to the respective Turing regions.
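The 'locally within the Turing region' test of Fig. 10 can be sketched by applying the standard two-species Turing conditions pointwise at the local Schnakenberg steady state (u, v) = (β(x), 1). The paper's precise conditions are given in Section 2.1 and are not shown in this excerpt, so the criteria below are the textbook ones and the function names are ours:

```python
import math

def beta(x, beta0=0.8, theta=1.0 / 3.0, n=1):
    return beta0 * (1.0 + theta * math.cos(n * math.pi * x))

def locally_turing(x, beta0=0.8, theta=1.0 / 3.0, d=1.0 / 40.0):
    """Classical two-species Turing conditions applied at position x, using
    the local Schnakenberg steady state (u, v) = (beta(x), 1)."""
    b = beta(x, beta0, theta)
    fu, fv = -1.0, -2.0 * b          # Jacobian entries, gamma factored out
    gu, gv = 1.0, 2.0 * b - 1.0      # (sign conditions are unchanged by gamma > 0)
    tr, det = fu + gv, fu * gv - fv * gu
    return (tr < 0 and det > 0                    # stable without diffusion
            and d * fu + gv > 0                   # diffusion-driven destabilisation
            and (d * fu + gv) ** 2 > 4 * d * det)  # achievable at some wavenumber

# For beta0 = 0.8, theta = 1/3, an interior band of the domain is locally
# within the Turing region while the endpoints are not.
assert locally_turing(0.4) and not locally_turing(0.0) and not locally_turing(1.0)
```

Evaluating this predicate across x reproduces the banded structure of the shaded regions in Fig. 10, and the gap between bands is the candidate decoupling mechanism discussed above.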
4 Conclusions
Despite being widely applicable to various problems in science, Turing instabilities in spatially-dependent reaction-diffusion systems have received very little attention in the literature. One of the roadblocks to understanding the behaviour of these systems is the lack of a definition for Turing instabilities when the problem depends on the spatial coordinate. The classical definition relies on the existence of a uniform steady state solution; however, no such steady state exists for spatially-dependent problems in general. In reformulating the definition, the problem arises of distinguishing between patterned states and the base state. The base state in the classical case is the uniform steady state. Since the steady state solutions of most spatially-dependent reaction-diffusion systems are non-uniform, it is unclear which states we should label as 'patterned' and which as a 'base state'. In order to link the spatially-dependent case with the classical case, we utilise tools from continuation to gradually increase the size of the heterogeneity. That is, the spatially-dependent term (or heterogeneity) is parameterised such that the heterogeneity vanishes initially and grows to full amplitude as the introduced parameter increases. The base state of the reaction-diffusion equation with a full-amplitude heterogeneity is then the solution found through this continuation. This grounds the spatially-dependent base state in the classical base state, and allows us to distinguish between patterned and non-patterned states. Defining the base state through continuation also provides a practical method for computing it via numerical continuation.
While we have extended the definition of the Turing base state, this does not directly extend the definition of the Turing instability. Traditionally, a Turing instability requires the base state to be stable to constant perturbations and unstable overall. The condition of stability to constant perturbations is not meaningful for a spatially-dependent base state. As such, the extension of the first Turing condition is not trivial even after defining the base state. We therefore discussed a few possibilities for how this condition could be extended, and the benefits of each possibility. Much more research can be done to analyse the properties of each of these definitions.
+After defining the base state for heterogeneous Turing systems, it remains
+to determine whether such base states exist. We provided a variety of case studies
+showing that the existence of heterogeneous base states was not guaranteed.
+Further, we could not determine, a priori, whether base states exist for a fi-
+nite size heterogeneity. To investigate this further, two parameter scans were
+performed. The first varied the average production rate of the first chemical,
+and the length of the domain. The second varied the form of the heterogeneity
+and the length of the domain. Both parameter scans were tested with both the
+Schnakenberg and the Gierer-Meinhardt reactions. For each set of parameters
+chosen, we measured how far the branch of solutions could be continued before
+reaching a fold bifurcation. This measures how large the heterogeneity can be
+before the Turing base state ceases to exist. The results of the parameter scans
+reveal strong correlations with existing, fundamental theory from the dispersion
+relation. Further research is needed to establish a clear link between the two.
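The fold-measurement strategy used in the scans can be sketched on a toy problem: step θ upward, re-solve for the steady branch starting from the previous solution, and record the last θ at which the solver succeeds. The scalar residual below, whose branch u = √(1 − θ) folds at θ = 1, is purely an illustrative assumption standing in for the discretised RD steady-state problem.

```python
import numpy as np
from scipy.optimize import newton

def F(u, theta):
    """Toy steady-state residual whose solution branch folds at theta = 1."""
    return (1.0 - theta) - u**2

u, last_good = 1.0, 0.0                 # exact solution u = 1 at theta = 0
for theta in np.arange(0.0, 1.5, 0.01):
    try:
        u_new = newton(F, u, args=(theta,), maxiter=50)
    except (RuntimeError, OverflowError, ZeroDivisionError):
        break                           # solver failed: branch has ceased to exist
    if not np.isfinite(u_new) or abs(F(u_new, theta)) > 1e-8:
        break                           # "converged" but residual is not a root
    u, last_good = u_new, theta

# last_good estimates how far the branch extends before the fold (close to 1 here)
```

The same loop, with the residual replaced by the discretised heterogeneous RD steady-state equations, gives a crude estimate of the fold location; dedicated continuation software tracks the fold itself more robustly.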
+For small domain lengths, it becomes even more difficult to distinguish be-
+tween patterned and non-patterned states. This is because the wavelengths of
+some patterns are often similar to the length scale of the heterogeneity. The
+new definition allows for this distinction to be made, so systems with a small
+domain length can be analysed. This new distinction allowed us to analyse how
+the critical domain length changes for heterogeneous RD systems. We numer-
+ically determined the critical domain length for a range of heterogeneity sizes,
+and a range of average production rates, for both the Schnakenberg system and
+a Gierer-Meinhardt system. This serves as a proof of concept of how the new
+definition could be applied to a new problem. In some cases, however, the
+method we used to find the critical
+domain length failed. It is possible that there are discontinuities in the critical
+domain length caused by a decoupling in the domain. The method should be
+further developed to account for this, in an attempt to resolve the issues with
+the method used.
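For reference, the classical critical domain length of the corresponding homogeneous problem follows directly from the standard linear dispersion-relation analysis; the heterogeneous results above can be compared against it. The Schnakenberg parameter values below are illustrative assumptions.

```python
import numpy as np

a, b, d, gamma = 0.2, 1.3, 40.0, 1.0       # illustrative Schnakenberg parameters
u0, v0 = a + b, b / (a + b) ** 2           # uniform steady state of the kinetics
fu, fv = -1.0 + 2.0 * u0 * v0, u0**2       # Jacobian of the kinetics at (u0, v0)
gu, gv = -2.0 * u0 * v0, -(u0**2)
detJ = fu * gv - fv * gu

# Mode k grows iff d*k^4 - gamma*(d*fu + gv)*k^2 + gamma^2*detJ < 0;
# the unstable band of squared wavenumbers is (k2_minus, k2_plus).
p = gamma * (d * fu + gv)
disc = p**2 - 4.0 * d * gamma**2 * detJ
assert disc > 0, "parameters do not admit a Turing instability"
k2_plus = (p + np.sqrt(disc)) / (2.0 * d)

# On [0, L] with zero flux, admissible wavenumbers are n*pi/L. As L grows,
# the n = 1 mode first enters the band when (pi/L)^2 drops below k2_plus:
L_crit = np.pi / np.sqrt(k2_plus)
```

Here `L_crit` is the classical critical domain length below which no Turing pattern can form in the homogeneous system.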
+References
+[1] J. F. G. Auchmuty and G. Nicolis, Bifurcation analysis of nonlinear
+reaction-diffusion equations—I. Evolution equations and the steady state
+solutions, Bulletin of Mathematical Biology, 37 (1975), pp. 323–365,
+https://doi.org/10.1007/bf02459519.
+[2] H. Brezis, Functional Analysis, Sobolev Spaces and Partial Differential
+Equations, Springer New York, 2011, https://doi.org/10.1007/978-0-387-70914-7.
+[3] S. N. Chow and J. K. Hale, Methods of Bifurcation Theory, Grundlehren der
+mathematischen Wissenschaften, Springer, New York, NY, 2011.
+[4] R. A. Van Gorder, Pattern formation from spatially heterogeneous
+reaction–diffusion systems, Philosophical Transactions of the Royal Society A:
+Mathematical, Physical and Engineering Sciences, 379 (2021),
+https://doi.org/10.1098/rsta.2021.0001.
+[5] M. Kozák, E. A. Gaffney, and V. Klika, Pattern formation in
+reaction-diffusion systems with piecewise kinetic modulation: An example study
+of heterogeneous kinetics, Phys. Rev. E, 100 (2019), p. 042220,
+https://doi.org/10.1103/PhysRevE.100.042220.
+[6] A. L. Krause, V. Klika, T. E. Woolley, and E. A. Gaffney, From one pattern
+into another: analysis of Turing patterns in heterogeneous domains via WKBJ,
+Journal of The Royal Society Interface, 17 (2020), p. 20190621,
+https://doi.org/10.1098/rsif.2019.0621.
+[7] B. A. Lawson and M. B. Flegg, A mathematical model for the induction of the
+mammalian ureteric bud, Journal of Theoretical Biology, 394 (2016), pp. 43–56,
+https://doi.org/10.1016/j.jtbi.2015.12.025.
+[8] V. Méndez, S. Fedotov, and W. Horsthemke, Reaction-Transport Systems:
+Mesoscopic Foundations, Fronts, and Spatial Instabilities, Springer, 2010.
+[9] K. Page, P. K. Maini, and N. A. Monk, Pattern formation in spatially
+heterogeneous Turing reaction–diffusion models, Physica D: Nonlinear Phenomena,
+181 (2003), pp. 80–101, https://doi.org/10.1016/S0167-2789(03)00068-X.
+[10] S. T. A. Pickett and M. L. Cadenasso, Landscape ecology: Spatial
+heterogeneity in ecological systems, Science, 269 (1995), pp. 331–334,
+https://doi.org/10.1126/science.269.5222.331.
+[11] R. Sheth, L. Marcon, M. F. Bastida, M. Junco, L. Quintana, R. Dahn,
+M. Kmita, J. Sharpe, and M. A. Ros, Hox genes regulate digit patterning by
+controlling the wavelength of a Turing-type mechanism, Science, 338 (2012),
+pp. 1476–1480, https://doi.org/10.1126/science.1226804.
+[12] G.-Q. Sun, M. Jusup, Z. Jin, Y. Wang, and Z. Wang, Pattern transitions in
+spatial epidemics: Mechanisms and emergent properties, Physics of Life Reviews,
+19 (2016), pp. 43–73, https://doi.org/10.1016/j.plrev.2016.08.002.
+[13] U. Timm and A. Okubo, Diffusion-driven instability in a predator-prey
+system with time-varying diffusivities, Journal of Mathematical Biology, 30
+(1992), pp. 307–320, https://doi.org/10.1007/bf00176153.
+[14] A. M. Turing, The chemical basis of morphogenesis, Philosophical
+Transactions of the Royal Society of London. Series B, Biological Sciences, 237
+(1952), pp. 37–72, https://doi.org/10.1098/rstb.1952.0012.
+[15] H. Uecker, Numerical Continuation and Bifurcation in Nonlinear PDEs,
+Society for Industrial and Applied Mathematics, 2021,
+https://doi.org/10.1137/1.9781611976618.
+Turing pattern or system heterogeneity? A numerical continuation approach to
+assessing the role of Turing instabilities in heterogeneous reaction-diffusion
+systems
+Jacob C. Vandenberg∗, Mark B. Flegg†
+January 23, 2023
+Abstract
+Turing patterns in reaction-diffusion (RD) systems have classically been studied
+only in RD systems which do not explicitly depend on independent variables such
+as space. In practice, many systems for which Turing patterning is important are
+not homogeneous with ideal boundary conditions. In heterogeneous systems with
+stable steady states, the steady states are also necessarily heterogeneous,
+which is problematic for applying the classical analysis. Whilst there has been
+some work done to extend Turing analysis to some heterogeneous systems, for many
+systems it is still difficult to determine if a stable patterned state is driven
+purely by system heterogeneity or if a Turing instability is playing a role. In
+this work, we try to define a framework which uses numerical continuation to map
+heterogeneous RD systems onto a sensible nearby homogeneous system. This
+framework may be used for discussing the role of Turing instabilities in
+establishing patterns in heterogeneous RD systems. We study the Schnakenberg
+and Gierer-Meinhardt models with spatially heterogeneous production as test
+problems. It is shown that for sufficiently large system heterogeneity (large
+amplitude spatial variations in morphogen production) it is possible that
+Turing-patterned and base states become coincident and therefore impossible to
+distinguish. Other exotic behaviour is also shown to be possible. We also study
+a novel scenario in which morphogen is produced locally at levels that could
+support Turing patterning, but on intervals/patches which are on the scale of
+classical critical domain lengths. Without classical domain boundaries, Turing
+patterns are allowed to bleed through; an effect noted by other authors. In
+this case, this phenomenon effectively changes the critical domain length.
+Indeed, we even note that this phenomenon may also effectively couple local
+patches together and drive instability in this way.
+∗School of Mathematical Sciences, Monash University, Clayton, Victoria 3800, Australia.
+†School of Mathematical Sciences, Monash University, Clayton, Victoria 3800, Australia.
+arXiv:2301.08373v1 [math.AP] 20 Jan 2023
+1 Introduction
+The reaction-diffusion (RD) equation is a nonlinear partial differential
+equation which exhibits extraordinarily diverse behaviour, observed particularly
+in the life sciences [13, 12, 10]. It models the concentration of different
+species in time as they interact whilst diffusing in space relative to each
+other. The species of the system could refer to a chemical species, biological
+species or ecological species, amongst other possibilities [8].
+Under certain conditions, solutions to the RD equation can have an instability
+which is “driven by diffusion”. This is called a Turing instability, which is
+usually defined as follows. Turing instabilities occur when an RD system has a
+spatially-uniform steady state which is unstable in the presence of diffusion,
+but stable in the absence of diffusion. Alan Turing’s seminal paper analyses
+Turing instabilities as a mechanism for explaining the emergence of spatial
+heterogeneity in diffuse biological chemical systems [14]. The reason Turing
+instabilities can explain this onset of heterogeneity is because they typically
+produce Turing patterns. Turing patterns are stable solutions to the RD equation
+which have large spatial oscillations, and are stationary in time. Usually
+diffusion has the effect of “flattening” the solution. In this case, however,
+diffusion is what causes the system to deviate away from uniformity.
+Often, RD models are spatially homogeneous in the sense that the RD PDE does
+not explicitly contain the spatial variable x (or t). Typically, RD models
+which exhibit Turing patterning are studied as homogeneous systems to simplify
+the analysis of the PDE (finding steady states, performing linear stability
+analysis, demonstrating the potential for patterning, etc.). At the same time,
+most real world applications almost certainly contain spatial variation in
+model parameters. Consider, for example, the patterning and development of
+digits, kidneys and lungs, where homogeneous models are analysed for the
+presence of Turing instabilities despite there being obvious spatial
+heterogeneity in morphogen production rates [7, 11].
+Turing patterning in the presence of spatially heterogeneous RD PDEs is not
+well understood and, surprisingly, has received very little attention in the
+literature. Perhaps one of the reasons for this is that Turing analysis of
+spatially heterogeneous RD PDEs is challenging: it is not even apparent how
+Turing instabilities should be defined. To begin, the unstable uniform steady
+state required for defining the Turing instability does not, by definition,
+exist for spatially heterogeneous RD PDEs. The analysis by Krause et al.
+presents a general stability theory for a heterogeneous RD PDE, but is limited
+to cases where the heterogeneity varies slowly almost everywhere relative to
+the domain size [6]. In that paper, Krause et al. define a ‘base state’
+solution which replaces the notion of the uniform steady state which has been
+‘flattened’ by diffusion. The base state, which must be a stationary solution
+to the PDE, has certain properties. Importantly, the base state does not have
+spatial oscillations with periods much smaller than the inhomogeneity in the
+PDE (it is nice and ‘diffused’). Aside from this definition being vague, it is
+not clear that it should apply if the PDE contains heterogeneities which vary
+on the same spatial scale as the Turing patterns for the system. This is
+because it is not easy to distinguish between patterned and base states if
+oscillations in the patterned state are on the same spatial scale as the base
+state. We shall also adopt the term ‘base state’, but attempt to find a more
+general approach to finding it.
+Another method which has been widely used in the literature is to limit the
+scope of the study to more specific examples. This includes choosing specific
+reaction terms such that an exact solution can be computed [9, 1]. At this
+point, a stability analysis similar to the classical analysis can be performed.
+Using a linear reaction term is common [9, 5], but nonlinear reaction terms can
+also be considered [1]. Truncated Galerkin expansions of the solution have been
+used to study the stability of heterogeneous problems [4, 5]. These too use
+specific examples to find base states analytically. No insight is given as to
+why the solutions that were found should be analogous to the uniform base
+state in the homogeneous case.
+In this manuscript, our aim is to investigate a method which may be used to
+find base states for heterogeneous reaction-diffusion PDEs. The stability of
+these base states may be used to define Turing patterns. We propose a method
+for describing base states and apply this method to the canonical Schnakenberg
+(substrate depletion) system as well as the Gierer-Meinhardt
+(activator-inhibitor) system. In both of these systems we allow the production
+of species to vary in space. We focus on two main curiosities. The first deals
+with critical phenomena which place limitations on when a base state may be
+defined, and the second deals with the onset of critical domain lengths for
+Turing instabilities in the presence of heterogeneous production.
+2 Methods
+The classical spatially-homogeneous dimensionless reaction-diffusion system is
+∂u/∂t = D∇²u + γF(u),  on Ω,  (1)
+∇u · n = 0,  on ∂Ω.  (2)
+Here u is a vector containing the concentration of model species/chemicals, D
+is a diagonal matrix of diffusion constants (with D11 = 1 providing a
+characteristic timescale for nondimensionalisation), and F is a nonlinear
+vector-valued function describing the possible sources and sinks of, and
+reactions between, the species. The domain Ω (which has an outward normal
+vector n) has been scaled through non-dimensionalisation so that the spatial
+scale of the system relative to that of diffusion is described by the
+magnitude of γ.
+A Turing analysis of this system begins by finding the uniform steady state
+solution u⋆ such that F(u⋆) = 0. Indeed, this uniform state is a solution to
+the model because the derivatives of u⋆ (a constant) are zero. Subsequently, a
+Turing pattern is formed when the solution u⋆, which is stable when D = 0, is
+unstable in the presence of diffusion. The uniform solution u⋆ will be called
+the base state and in heterogeneous problems loses its uniformity. It is the
+natural, diffusion-flattened, state of the system.
+We can extend the RD model to account for explicit spatial variation:
+∂u/∂t = div(D(x)∇u) + γF(u, x),  on Ω,  (3)
+∇u · n = 0,  on ∂Ω.  (4)
+If we were to proceed as before, we could take u⋆(x) which satisfies
+F(u⋆(x), x) = 0 for all x ∈ Ω. The diffusion term div(D(x)∇u⋆(x)) is not zero
+in general, which would mean u⋆(x) is not a steady state solution of
+Equation (3). Thus, it does not make sense to analyse its stability. So in
+order to extend the definition of a Turing instability, we need to find a
+different base state u⋆(x) which satisfies the steady state problem for
+Equations (3) and (4) but also should not be called a Turing pattern. Whilst a
+‘pattern’ is often defined as any stable stationary heterogeneous solution, we
+reserve the term pattern in this manuscript for any stationary heterogeneous
+state separate from the base state.
+As it stands, there is no conventional way of finding, or defining more
+generally, what this base state is. The only thing that can be said about the
+base state u⋆(x) is that it should be somehow sensibly analogous to the
+uniform base state described for the homogeneous system.
+We will narrow the scope of our efforts to the case where heterogeneity is in
+the reaction term only. Specifically, we look at systems with heterogeneous
+production rates of each species, as we believe that this situation is
+ubiquitous in biological applications, where morphogen is differentially
+expressed in space but reactions between morphogens are autonomous, as one
+might expect. Thus, the form of the RD equation that we will be analysing is
+as follows; it splits F into autonomous, homogeneous ˆF and heterogeneous G
+components. How this partition should be done appropriately and uniquely is
+outlined here, and justified in Section 2.1.
+∂u/∂t = D∇²u + γ(ˆF(u) + G(u, x)),  on Ω,  (5)
+∇u · n = 0,  on ∂Ω.  (6)
+To analyse this system, we will find it useful to ‘grow’ the heterogeneous
+components by means of a parameter θ, defining the parameterised problem
+∂u/∂t = D∇²u + γ(ˆF(u) + θG(u, x)),  on Ω,  (7)
+∇u · n = 0,  on ∂Ω.  (8)
+Importantly, the parameter θ in these models describes the amplitude of the
+heterogeneity in the system: when θ → 0 a classical system is recovered, and
+when θ → 1 the full heterogeneous problem is recovered. As θ may be thought of
+as the amplitude of the heterogeneity and easily absorbed into G, it is
+possible to also think of θ growing beyond 1, simply forming part of a growing
+G in Equations (5) and (6).
+Whilst there is freedom in the choice of the partition of F in Equation (3)
+into G and ˆF in Equation (5), we find it appropriate to uniquely define G and
+ˆF for a given F in the following way:
+ˆF = (1/|Ω|) ∫_Ω F(u, x) dx,  (9)
+G = F − ˆF.  (10)
+This is a convenient choice when the reaction term can be decomposed into a
+spatially-independent coupling term and a spatially-dependent source term:
+F(u, x) = ˆF(u) + G(x), where the average value of G is 0. Furthermore, by
+using this decomposition for F, we ensure that for each θ the parameterised
+system (Equation (7)) adheres to the same decomposition rules, whilst at the
+same time capturing the autonomous reactions in F within ˆF. It is often these
+terms which are the characteristically important ingredients in the Turing
+behaviour of the system (noting that F → ˆF as θ → 0).
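The averaging decomposition of Equations (9) and (10) is straightforward to realise numerically. In the sketch below, the 1D grid and the Schnakenberg-style reaction term with a spatially varying production rate are illustrative assumptions.

```python
import numpy as np

# Discretised 1D domain Omega = [0, 10] (assumed for illustration).
x = np.linspace(0.0, 10.0, 201)
a = 0.2 + 0.1 * np.cos(np.pi * x / 10.0)   # heterogeneous production rate a(x)

def F_u(u, v):
    """Full heterogeneous reaction term for the u-species (Schnakenberg-like)."""
    return a - u + u**2 * v

u, v = 1.5, 0.6                  # any fixed concentrations
F_vals = F_u(u, v)
F_hat = F_vals.mean()            # uniform-grid approximation of (1/|Omega|) * integral of F
G_vals = F_vals - F_hat          # heterogeneous remainder, zero mean by construction
```

By construction `G_vals` averages to zero over the grid, matching the requirement that the average value of G is 0.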
+page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1 Base states In this section, we attempt to redefine the base state of a heterogeneous reaction- diffusion system as a parameterised continuation of a nearby homogeneous sys- tem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
A necessary condition on the base state of a reaction-diffusion system (Equations (7) and (8)) is that it must be a stationary solution, against which stability can later be checked. The base state of Equations (7) and (8) shall be labelled u⋆_θ(x) (and sometimes u⋆_θ(x; θ), to highlight the dependence on the parameter θ). That is, u⋆_θ(x) is a solution to

    D∇²u + γ(F̂(u) + θG(u, x)) = 0 on Ω,    (11)
    ∇u · n = 0 on ∂Ω.    (12)

Since the base state should become the uniform steady state as θ → 0, we have that u⋆_0 ∈ R^{Ns} (where Ns is the number of species in the model) is constant in x and satisfies F̂(u⋆_0) = 0.
It makes sense to represent Equations (11) and (12) as the single equation

    Φ(u, x, x̄; θ) = (D∇²u(x) + γ(F̂(u(x)) + θG(u(x), x)), ∇u(x̄) · n)⊤ = 0,    (13)

where x ∈ Ω and x̄ ∈ ∂Ω.
In order to label a solution to Equation (13) as a base solution, we will further require that it varies continuously with respect to θ. In this way, the base states of the system are tied, via continuation in the parameter θ, to the base state u⋆_0 (the uniform steady state) of an associated homogeneous system (as θ → 0). To ensure the existence of u⋆_θ for some θ ≠ 0, we seek η > 0 and u⋆ : (−η, η) → C²(Ω, R^{Ns}) such that u⋆_θ uniquely solves Equation (13) and u⋆_0 solves F̂(u⋆_0) = 0. The value η provides a region in which every −η ≤ θ ≤ η is guaranteed to have a base state solution. Outside of (−η, η), the amplitude of the heterogeneity may become so large that it is not possible to draw a continuation from u⋆_0.
We define the Jacobian

    J̄_θ(u, x) = ∂Φ/∂u = (J_θ(u, x), n · ∇)⊤    (14)
              = (D∇² + γ(j_F̂(u) + θ j_G(u, x)), n · ∇)⊤.    (15)

Here j_F̂(u) and j_G(u, x) are the Jacobians of F̂ and G respectively.
For continuity and uniqueness of u⋆_θ in θ at θ = 0 by the Implicit Function Theorem (IFT) [3], we require J̄_0 to be invertible at u⋆_0, and therefore that j_F̂(u⋆_0) is nonsingular. Singularity in J̄_θ allows for the possibility that θ becomes too large in magnitude for a base state u⋆_θ to be defined. It is unclear in general how large a heterogeneity (θ) can become before the base state either ceases to exist or is no longer unique, or indeed whether the base state is bounded in this way at all. Defining the base state outside of some potential maximal range θ− < θ < θ+ is problematic and, in our framework, not (yet) possible. The values θ− and θ+ coincide with folds in the solution to Equations (11) and (12), characterised by singularities in J̄_{θ−} and J̄_{θ+}.
Definition 1 (Spatially-dependent Turing base state). For each u0 ∈ R^{Ns} such that F̂(u0) = 0, we define the associated spatially-dependent Turing base state (or just base state) for Equation (5) as follows. Suppose there exists u⋆_θ(x; θ) ∈ C¹(Ω × (0, 1], R^{Ns}) which is a steady state solution to Equation (7) for all θ ∈ (0, 1] and for which u⋆_0(x; 0) = u0. Then u⋆_1(x) is a Turing base state to the spatially-dependent RD system (Equations (5) and (6)) associated with the uniform base state u⋆_0(x).
Defining the base state in this way is a natural extension of the classical homogeneous case, since the heterogeneous base state should not deviate far from the uniform one when the amplitude of the heterogeneity in the system is small. In other words, if the heterogeneity in the system is small, we would expect the base state to be almost 'flattened' by diffusion.
As an important note, we have chosen to define F̂ and G using Equations (9) and (10); in doing so we ensure that all autonomous terms in F (for example, reaction kinetics between species which drive Turing instabilities) are encapsulated in F̂. Clearly, it is possible to simply define G = F and F̂ = 0. With this choice, we immediately see that j_F̂(u⋆_0) is singular and continuation to the heterogeneous base state is impossible. In the case where F̂ ≠ 0, we have

    J̄_0(u⋆_0, x) = ∂Φ/∂u |_{u=u⋆_0, θ=0} = (D∇² + γ j_F̂(u⋆_0), n · ∇)⊤.
We apply this operator to c_j ŵ_m, where c_j ∈ R^{Ns} is the jth eigenvector of A_m = −Dk²_m + γ j_F̂(u⋆_0) and ŵ_m is the eigenfunction solving ∇²ŵ_m = −k²_m ŵ_m on Ω with ∇ŵ_m · n = 0 on ∂Ω. This gives

    (D∇²(c_j ŵ_m) + γ j_F̂ c_j ŵ_m, n · ∇(c_j ŵ_m))⊤ = (A_m c_j ŵ_m, 0)⊤ = λ_j(A_m) (c_j ŵ_m, 0)⊤,

where λ_j(A_m) is the eigenvalue associated with the eigenvector c_j. This eigenvalue determines the stability of the mode c_j ŵ_m. So if any eigenvector c_j has a corresponding λ_j(A_m) = 0, the operator J̄_0 will not be invertible and the conditions for the IFT will not be satisfied.
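The invertibility of J̄_0 can be checked mode by mode by computing the eigenvalues λ_j(A_m) numerically. The following sketch is a Python illustration only; the use of the Schnakenberg kinetics of Section 2.2.1 with d = 1/40, β0 = 0.8 and γ = 1, and the Neumann modes k_m = mπ on Ω = (0, 1), are assumptions chosen for concreteness:

```python
import numpy as np

beta0, d, gamma = 0.8, 1.0 / 40.0, 1.0
D = np.diag([1.0, d])
# Jacobian of the averaged Schnakenberg kinetics
# (-u v^2 + beta0, u v^2 - v + 1 - beta0) at u*_0 = (beta0, 1):
J = np.array([[-1.0, -2.0 * beta0],
              [ 1.0,  2.0 * beta0 - 1.0]])

def mode_eigenvalues(m):
    """Eigenvalues of A_m = -D k_m^2 + gamma * j_Fhat(u*_0) for the
    Neumann mode k_m = m*pi on the domain (0, 1)."""
    k2 = (m * np.pi) ** 2
    return np.linalg.eigvals(-D * k2 + gamma * J)

# A zero eigenvalue of any A_m means J_bar_0 is not invertible and the
# IFT hypothesis fails; a positive real part marks a Turing-unstable mode.
growth = [mode_eigenvalues(m).real.max() for m in range(6)]
```

With these parameter values only the m = 1 mode has a positive growth rate, consistent with the single unstable wavenumber discussed later for the γ = 1 case.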
The continuation of base states from θ = 0 cannot proceed unless G(u⋆_0, x) is orthogonal to every eigenvector in the null space of the adjoint operator (∂Φ/∂u)*. That is,

    ∫_Ω G(u⋆_0, x)⊤ v dx = 0, for all v ∈ null(D∇² + γ j_F̂⊤).

This is a consequence of the Fredholm alternative [2]. This solvability condition is not guaranteed, so for any chosen parameterisation there may still be cases where continuation about θ = 0 is impossible.
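The solvability condition can be tested by quadrature. As a hypothetical illustration (not the paper's computation), suppose the adjoint null space on Ω = (0, 1) is spanned by the Neumann eigenfunction v(x) = cos(mπx); a heterogeneity G(u⋆_0, x) = cos(nπx) then satisfies the condition precisely when n ≠ m:

```python
import numpy as np

def inner_product(f, g, xs):
    """Trapezoidal approximation of the integral of f*g over the grid xs."""
    dx = xs[1] - xs[0]
    h = f * g
    return dx * (0.5 * h[0] + h[1:-1].sum() + 0.5 * h[-1])

xs = np.linspace(0.0, 1.0, 2001)
m, n = 2, 1
v = np.cos(m * np.pi * xs)   # hypothetical adjoint null eigenfunction
G = np.cos(n * np.pi * xs)   # heterogeneity evaluated at u*_0

ip_orthogonal = inner_product(G, v, xs)   # ~0: continuation may proceed
ip_resonant = inner_product(v, v, xs)     # 1/2: condition violated if G is proportional to v
```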
We have chosen to multiply the heterogeneity G by a parameter θ. Of course, this parameterisation of heterogeneity (θ = 1) from the associated homogeneous system (θ = 0) is not unique. In Equations (7) and (8) we increase the size of the heterogeneity linearly with the parameter θ. A more general parameterisation could be

    ∂u/∂t = D∇²u + γF̂(u) + γG(u, x; θ),    (16)

provided that F̂(u) + G(u, x; 1) ≡ F(u, x) and G(u, x; 0) ≡ 0.
The IFT only provides information about the existence and uniqueness of the base state solution branch locally. The existence and uniqueness of the base state solution at θ = 1 is unknown a priori. In particular, it is unknown whether changing the parameterisation of G will change the base state, or its existence. For this, a global homotopy result would be required.
The analysis by Krause et al. gives a general stability theory for a large perturbation in the limit as γ approaches ∞ [6]. However, little attention is given to redefining the base state for the Turing instability. The analysis assumes that a steady state solution to the full RD equation (Equations (3) and (4)) exists, and that this solution has certain properties. The first is that the solution does not have spatial oscillations on the scale O(1/ϵ). This is an a posteriori assumption, since no method is provided for determining whether the base state u⋆(x) has O(1/ϵ) oscillations without first finding u⋆(x). Since the heterogeneous RD equation is nonlinear in general, finding such a solution is non-trivial. Finally, it is assumed that the solution satisfies the boundary conditions ∂u/∂x = 0 at x = 0, 1.
2.2 Case studies

In our numerical investigation, we focus attention on two popular models: the Schnakenberg model and the Gierer-Meinhardt model. In their standard homogeneous forms, the Schnakenberg model is widely studied as a substrate-depletion Turing system, whilst the Gierer-Meinhardt model is a typical activator-inhibitor Turing system. In both cases we solve the PDEs only on the one-dimensional domain Ω = (0, 1), with no-flux conditions for each species on the boundary.
2.2.1 Schnakenberg model

The parameterised heterogeneous Schnakenberg model we will be using is as follows:

    ∂u/∂t = ∇²u + γ(−uv² + β(x)),    (17)
    ∂v/∂t = d∇²v + γ(uv² − v + η(x)).    (18)

Here d represents the relative diffusion of the activator v compared to that of the substrate u, whilst β and η are spatially dependent production rates. We will focus on a particular form of β and η in which we parameterise the scale of both the amplitude and the frequency of the production heterogeneity:

    β(x) = β0 (1 + θ cos(nπx)),    (19)
    η(x) = 1 − β(x).    (20)

In this way, a combined dimensionless activator/substrate production of 1 is assumed at each position. The parameter 0 ≤ β0 ≤ 1 describes the average proportion of this production specific to the substrate, and the parameter 0 ≤ θ ≤ 1 describes the degree of redistribution of the relative production into n periods of peaks and troughs on the domain Ω.
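Since β(x) + η(x) = 1, the spatially averaged kinetics admit a unique positive homogeneous steady state: adding the two reaction equations gives v = 1, and back-substitution gives u = β0. A short sketch, for illustration:

```python
import numpy as np

def schnakenberg_steady_state(beta0):
    """Root of the averaged Schnakenberg kinetics
    Fhat(u, v) = (-u v^2 + beta0, u v^2 - v + 1 - beta0):
    summing the components gives v = 1, then -u v^2 + beta0 = 0
    gives u = beta0."""
    v = 1.0
    u = beta0 / v**2
    return u, v

beta0 = 0.8
u0, v0 = schnakenberg_steady_state(beta0)
# Verify Fhat(u0, v0) = 0 componentwise:
residual = np.array([-u0 * v0**2 + beta0,
                     u0 * v0**2 - v0 + 1.0 - beta0])
```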
2.2.2 Gierer-Meinhardt model

The parameterised heterogeneous Gierer-Meinhardt model is given as follows:

    ∂u/∂t = ∇²u + γ(u²/v − bu + a(x)),    (21)
    ∂v/∂t = d∇²v + γ(u² − v).    (22)

This model is controlled by the heterogeneous production rate a(x) of the activator u. We will use a periodic heterogeneity of the form

    a(x) = a0 (1 + θ cos(nπx)),

where a0 ∈ R is the average production rate.
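The corresponding homogeneous steady state (θ = 0) follows directly: the inhibitor equation gives v = u², so u²/v = 1 and the activator equation reduces to 1 − bu + a0 = 0. A sketch, for illustration, with b = 1 and a0 = 0.1 (the default values quoted later):

```python
def gierer_meinhardt_steady_state(a0, b):
    """Homogeneous steady state of (u^2/v - b*u + a0, u^2 - v):
    v = u^2 makes u^2/v = 1, leaving 1 - b*u + a0 = 0."""
    u = (1.0 + a0) / b
    return u, u**2

u0, v0 = gierer_meinhardt_steady_state(0.1, 1.0)
# Verify both kinetic equations vanish at (u0, v0):
residual_u = u0**2 / v0 - 1.0 * u0 + 0.1
residual_v = u0**2 - v0
```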
2.3 Numerical methods

To generate numerical results, we use the numerical continuation method presented by Uecker [15] to find solutions of Equations (11) and (12); by starting at u⋆_0 we find base states for the heterogeneous problem. We begin with the statement that Φ(u, x, x̄; θ) = 0 (u must be a solution to Equations (11) and (12)). Differentiating with respect to θ,

    0 = (∂Φ/∂u)(∂u/∂θ) + ∂Φ/∂θ.

So long as ∂Φ/∂u is nonsingular, ∂u/∂θ can be estimated. As such, the base states (and other steady states of the reaction-diffusion system) can be found by starting at θ = 0 and incrementing θ using a forward Euler approach:

    u_{θ+Δθ} = u_θ + (∂u_θ/∂θ) Δθ    (23)
             = u_θ − (∂Φ_θ/∂u)⁻¹ (∂Φ_θ/∂θ) Δθ,    (24)

where subscripts indicate the value of θ. The solution generated by Equation (24) is then corrected to reduce error, by using u_{θ+Δθ} as the initial seed of a Newton solver for the problem Φ = 0.
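The predictor-corrector scheme of Equations (23) and (24) can be sketched in a few lines. The scalar toy problem Φ(u; θ) = u² − 1 − θ is an assumption chosen purely for illustration; the paper applies the same scheme to the discretised PDE via pde2path:

```python
def Phi(u, th):       return u**2 - 1.0 - th
def dPhi_du(u, th):   return 2.0 * u
def dPhi_dth(u, th):  return -1.0

def continue_branch(u, th_end, dth=0.1, newton_steps=5):
    """Trace the solution branch of Phi = 0 from theta = 0 to th_end:
    Euler predictor (Equation (24)) followed by Newton correction."""
    th = 0.0
    while th < th_end - 1e-12:
        u = u - dPhi_dth(u, th) / dPhi_du(u, th) * dth   # predictor
        th = th + dth
        for _ in range(newton_steps):                    # corrector
            u = u - Phi(u, th) / dPhi_du(u, th)
    return u

# Branch starting from the "homogeneous" root u*_0 = 1 at theta = 0;
# the exact branch is u = sqrt(1 + theta).
u_final = continue_branch(u=1.0, th_end=1.0)
```

Skipping the predictor and applying Newton directly near u_θ also works here, but, as noted below for the PDE setting, the predictor keeps the corrector cheap and on the intended branch.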
We did not find it necessary to use more advanced techniques for incrementing θ. It is possible to skip the approximate update of Equation (24) and simply use a nonlinear solver on Φ = 0 in the vicinity of u_θ. This is, however, not a good idea, since it significantly increases the computational time spent in the nonlinear solver and can sometimes even result in the solver finding a different steady state solution (of which there may be many). In any case, we make use of the pde2path package, which implements this routine. Finally, pde2path determines stability from the sign of the largest real part of the eigenvalues of the linearised PDE operator.
In the next section we explore numerical results which give insight into the behaviour of Turing systems with heterogeneous production rates. We first look at the characteristic behaviour of base states (Section 3.1). Noting that base states often terminate at a fold bifurcation for a sufficiently large value of θ, it is clear that for some problems, if the heterogeneity is large enough, a base state is not defined under our definition. We therefore investigate more thoroughly what determines whether a base state exists, and how large θ can be before a fold bifurcation is reached (Section 3.2). Lastly, we examine how heterogeneous production can affect the critical domain lengths required for Turing patterning (Section 3.3).
3 Numerical results and discussion

3.1 Continuation of steady states

The first numerical results illustrate the behaviour of the Schnakenberg Turing system described in Section 2.2.1 as the heterogeneous production term is increased in amplitude, by tracing the base state and patterned states through numerical continuation of the amplitude parameter θ. We will first look at some example cases to illustrate the types of branches that can be found. For all of the following results we use the parameters d = 1/40, β0 = 0.8 and n = 1 unless otherwise stated. Later we will show results for the Gierer-Meinhardt model of Section 2.2.2, where we will use the default parameters d = 20, b = 1 and a0 = 0.1 unless otherwise stated. When θ = 0, these parameters are known to give a Turing instability in the base state. The parameter γ, which encodes the domain length amongst other things, will be varied between examples to show how the base state behaves as it varies.
In order to visualise the steady state solution branches, we plot the maximum value of the variable u on the domain against the parameter θ. This metric has been chosen arbitrarily in order to distinguish between solutions. It is important to remember when interpreting these bifurcation plots that the branches are only a projection of the infinite-dimensional function space onto a single scalar value for plotting purposes. Importantly, this means that when branches intersect non-smoothly, the intersection is not a continuation: at the point of intersection the branches correspond to completely unrelated functions (other than the fact that they share a common maximal value of u).
In many cases, we observe that the continuation in θ can generate base states indefinitely. We also observe two main bifurcation events on the branch containing the base state. The first is a fold at which the base state and a stable patterned state emerge. The second is an example of a fold terminating the base state, but where the Turing patterned state never bifurcates from the base state (they are, instead, perfectly disconnected). By 'patterned state' we mean a branch corresponding to a non-homogeneous but stable steady state (indicated in blue in each figure). Finally, we demonstrate some exotic behaviour of the steady states under certain conditions.
Base state with no limitation

In the simplest case, starting with u⋆_0 and growing the heterogeneous term of Section 2.2.1 by increasing θ, no folds were found in increasing θ from 0 to 1. It is important to note that this does not mean that the base states will extend to arbitrarily large θ. For the Schnakenberg system in Section 2.2.1, we find that this often occurs for large γ, and in Fig. 1 we use the value γ = 900. This corresponds to a very large domain in relation to the expected wavelength of any Turing patterns. Our value of γ corresponds to a value of ϵ ≈ 1.1 × 10⁻³ in the paper by Krause et al. [6]. We find that in this case the base state exists by numerical continuation and, furthermore, that it is approximately equal to the steady state obtained when diffusion is neglected as small. This is expected, because it is clear from Equations (11) and (12) that, unless θ is large (on the order of γ), for large γ we have to leading order that u⋆_θ solves F̂(u) + θG(u, x) = 0.
Base state fold connected to a patterned state

We observe different behaviour in the base state for non-large γ. If γ is small, but not so small that Turing patterns cannot be observed in the homogeneous Schnakenberg system (due to the domain size being less than the necessary critical domain length), then we observe a critical fold in the base state solution. In Fig. 2, we use the value γ = 1. When θ = 0, this corresponds to the case where there is just one unstable wavenumber, corresponding to a Turing pattern with just a half period on the full domain. In this case, the branch for a patterned state merges with the branch of the base state, undergoing a fold bifurcation as seen in Fig. 2. This means that the base state becomes closer and closer to a patterned state until the two states are indistinguishable from each other at the fold bifurcation.
For heterogeneities with an amplitude θ beyond this fold (shown with a green dot in Fig. 2), we are unable to objectively define a suitable base state, and it therefore becomes ambiguous whether or not a ‘Turing’ pattern is observed in the solution of the reaction-diffusion problem. Indeed, whilst a steady state solution to the reaction-diffusion equation is expected beyond the fold, we cannot locate this solution by numerical continuation from θ = 0 without significant work. That is, there are other missing branches here, and it remains unclear whether any of these are reasonable candidates to be defined as a ‘base state’ at this stage; further work here is needed. In Fig. 2, one can see the stable patterned state but also an unstable patterned state. For θ = 0 there are at least two patterned states, which appear in the bifurcation diagram as mirrored functions. Interestingly, if the heterogeneity is inverted in sign (θ ∈ [−1, 0]), continuation shows a mirror image of the bifurcation diagram in Fig. 2.
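Branch tracking of this kind is usually done with pseudo-arclength continuation, which, unlike naive continuation in θ, can pass through a fold. Below is a minimal sketch on a toy scalar problem, F(x, θ) = x² + θ − 1 = 0, whose branch folds at θ = 1; in the PDE setting x would be the discretised steady state.

```python
import numpy as np
from scipy.optimize import fsolve

# Pseudo-arclength continuation through a fold, on the toy problem
# F(x, theta) = x^2 + theta - 1 = 0 (fold at x = 0, theta = 1).
# Unknowns y = (x, theta); each step: tangent predictor, then a corrector
# that solves F = 0 together with an arclength constraint.

def F(x, theta):
    return x * x + theta - 1.0

def grad(x, theta):
    return np.array([2.0 * x, 1.0])        # [dF/dx, dF/dtheta]

ds = 0.05
y = np.array([1.0, 0.0])                   # start on the branch: x = 1 at theta = 0
tangent = np.array([0.0, 1.0])             # initial direction: increasing theta

branch = [y.copy()]
for _ in range(60):
    g = grad(*y)
    t = np.array([-g[1], g[0]])            # null vector of the gradient row
    t /= np.linalg.norm(t)
    if np.dot(t, tangent) < 0:             # keep a consistent orientation
        t = -t
    tangent = t
    pred = y + ds * t                      # predictor
    y = fsolve(lambda z: [F(*z), np.dot(t, z - pred)], pred)  # corrector
    branch.append(y.copy())

thetas = [p[1] for p in branch]
print("largest theta reached (near the fold):", max(thetas))
```

The corrector constrains the new point to the hyperplane orthogonal to the tangent, so the step size stays ds even where dθ/ds changes sign at the fold.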
[Plot: ∥u∥∞ against θ ∈ [0, 1]; legend: Initial Solution, Unstable Branch, Stable Branch.]

Figure 1: Schnakenberg system bifurcation diagram for growing heterogeneity θ ∈ [0, 1]. Parameters used are characteristic of large domains relative to the Turing pattern wavelength (γ = 900), with β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆_θ are displayed. In this case, the base state is allowed to grow continuously without a fold. On the other hand, a (blue) stable Turing ‘patterned’ state branch is also shown with some displayed distributions of u. This branch is found by solving the full reaction-diffusion equation at θ = 0 and applying numerical continuation.
[Plot: ∥u∥∞ against θ; legend: Initial Solution, Fold, Unstable Branch, Stable Branch.]

Figure 2: Schnakenberg system bifurcation diagram for growing heterogeneity θ ∈ [0, 1]. Parameters used are characteristic of small domains relative to the Turing pattern wavelength (γ = 1), with β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆_θ are displayed. In this case, the base state merges with the stable patterned state at around θ = 0.09. The blue branches are stable patterned states, but only the solid branch can be obtained by continuing through the fold. The dot-dash branch can be found through continuation of a fold in the base state when decreasing θ from the θ = 0 base state.
Base state fold not connected to a patterned state

At intermediate values of γ, more curious behaviour is possible. This is in part because these values permit multi-wavelength heterogeneous steady states. In Fig. 3 we now display the bifurcation diagram for γ = 9 (analogous to a three-fold increase in domain length over the example in Fig. 2). The key observation in Fig. 3 is that whilst the base state branch also undergoes a fold bifurcation, the solution branch with which it merges is an unstable heterogeneous steady state (not a stable pattern). This illustrates that the base state branch can merge with a branch which is not a branch of patterned states. Considering Fig. 1, where the base state seemingly continues indefinitely without folds, it is possible that a fold is present in a similar way to Fig. 3, but at sufficiently large values of θ. If this is the case, our observations suggest that as γ becomes very large, so too does the value of θ at which base state folding first occurs.
Exotic behaviour

While the previous examples show two branches originating at θ = 0 converging, this does not capture all possibilities. In a more bizarre scenario, we can consider the case where γ = 3.61. As shown in Fig. 4, the system undergoes many folds before merging with another solution branch which contains θ = 0. Furthermore, there are stable steady states which are only present for a limited range of θ values. To demonstrate this behaviour and the way the branch closes on itself, it was necessary to continue in both the positive and negative θ directions from u⋆_0.
3.2 Base state existence

In order to have a discussion about Turing patterns, it is important for a base state to exist. It is therefore critical to explore what determines θ+, the maximum size that θ can take before a critical point such as a fold is encountered. To accomplish this, we performed parameter scans on both the Schnakenberg and Gierer-Meinhardt models from Sections 2.2.1 and 2.2.2. Our immediate observation from these scans is that fold bifurcations are very common. In particular, we observed more folds when the spatially-dependent source term G(u, x) varies explicitly in space with frequencies similar to those of unstable eigenvectors in the dispersion relation.
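One way such a scan for θ+ could be organised, assuming the fold is regular, is to solve the standard extended fold system F = 0, ∂F/∂x = 0 directly for each parameter value rather than running a full continuation every time. A toy sketch follows, with c a hypothetical stand-in for β0 (or a0); this is not the procedure the paper necessarily used.

```python
import numpy as np
from scipy.optimize import fsolve

# Locating theta_+ via the extended fold system: at a fold, the
# steady-state condition F(x, theta) = 0 holds together with a singular
# linearisation dF/dx = 0.  Toy model: F = x^2 + theta - c, which folds
# at x = 0, theta_+ = c.

def fold_system(z, c):
    x, theta = z
    return [x * x + theta - c,             # steady-state condition F = 0
            2.0 * x]                       # singularity condition dF/dx = 0

for c in [0.25, 0.5, 1.0]:
    x_fold, theta_plus = fsolve(fold_system, [0.1, 0.0], args=(c,))
    print(c, theta_plus)
```

For a discretised PDE the second equation becomes J(x, θ)·φ = 0 for a nontrivial null vector φ, with a normalisation on φ appended to close the system.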
In Fig. 5 we look at θ+ for the Schnakenberg model (a) and the Gierer-Meinhardt model (b). In Fig. 5(a) we plot θ+ as the scale parameter γ and the parameter β0 in the Schnakenberg model are varied, whilst in (b) we instead vary the parameter a0 in the Gierer-Meinhardt model. In both cases we have plotted, in red, the curves along which Λm = maxj ℜ(λj(Am)) = 0 for m = 1, 2, 3 (for curves left to right). We note that in our test problems we do not have strictly imaginary eigenvalues, so along these curves J̄0 is singular and we expect that θ+ is not finite. For each constant β0 (or a0) we see that Λm = 0 at most twice, because solving Λm = 0 requires solving a quadratic.
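The quantities Λm can be computed directly from the mode-wise linearisation Am = γJ − k_m²D. The sketch below assumes a standard Schnakenberg nondimensionalisation with Neumann modes k_m = mπ on a unit domain, and illustrative parameter values, not those of the paper.

```python
import numpy as np

# Lambda_m = max_j Re(lambda_j(A_m)) for the mode-m linearisation
# A_m = gamma*J - k_m^2 * D, k_m = m*pi (Neumann modes on [0, 1]).
# Kinetics: an assumed standard Schnakenberg form
# f = a - u + u^2 v, g = b - u^2 v, with illustrative parameters.

def schnakenberg_jacobian(a, b):
    u, v = a + b, b / (a + b) ** 2         # homogeneous steady state
    return np.array([[-1 + 2 * u * v, u ** 2],
                     [-2 * u * v, -u ** 2]])

def Lambda(m, gamma, J, D):
    k2 = (m * np.pi) ** 2
    A_m = gamma * J - k2 * D
    return max(np.linalg.eigvals(A_m).real)

J = schnakenberg_jacobian(a=0.1, b=0.9)
D = np.diag([1.0, 10.0])                   # diffusion ratio chosen to allow Turing instability
gamma = 176.0

for m in range(5):
    print(m, Lambda(m, gamma, J, D))
```

With these values the homogeneous mode (m = 0) is stable while an intermediate mode has Λm > 0, the classical Turing signature that the red curves in Fig. 5 delineate.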
Between the

[Plot: ∥u∥∞ against θ; legend: Initial Solution, Fold, Unstable Branch, Stable Branch.]

Figure 3: Schnakenberg system bifurcation diagram for growing heterogeneity θ ∈ [0, 1]. Parameters used are characteristic of intermediate domains relative to the Turing pattern wavelength (γ = 9), with β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆_θ are displayed. In this case, the base state merges with an unstable heterogeneous steady state at around θ = 0.12. The blue branch is a stable patterned state, but the dot-dash nature of this branch indicates that it is not obtained by continuation past a fold from the base state; instead it is found by solving the reaction-diffusion equation with θ = 0 until steady state and using continuation from there.
[Plot: ∥u∥∞ against θ; legend: Initial Solution, Fold, Unstable Branch, Stable Branch.]

Figure 4: Schnakenberg system bifurcation diagram for growing heterogeneity θ ∈ [−1, 1]. Parameters used are characteristic of narrowly defined domains relative to the Turing pattern wavelength (γ = 3.61), with β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆_θ are displayed. Note that here the base state would only be defined between approximately −0.05 and 0.05. By continuing through each fold, we end up back at u⋆_0. Interestingly, this closed loop contains three different patterned branches (blue) but no patterned branch on approximately ±(0.03, 0.04). It is expected that the patterned state obtained by solving the reaction-diffusion equation in this regime is not connected here.
[Plots: colour maps of θ+ on a log scale over (√γ, β0) in panel (a) and (√γ, a0) in panel (b), with red curves where Λm = 0.]

Figure 5: Size of continuation before a fold, θ+, for (a) the Schnakenberg model and (b) the Gierer-Meinhardt model as γ is varied along with (a) β0 and (b) a0, respectively. The size of the continuation is presented in colour on a log scale. All of these results are given for n = 1 in the heterogeneous term in the respective models. Red curves are drawn on the figures to correspond with Λm = maxj ℜ(λj(Am)) = 0 for m = 1, 2, 3 (for curves left to right on both subfigures), where λj(Am) are eigenvalues defined in Section 2.1.
+page_content=' The background color of white indicates that no fold was found for these parameter sets and θ was allowed to grow to 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' 17 5 10 15 20 25 √γ 2 4 6 8 10 n a) 0 50 100 150 200 √γ 2 4 6 8 10 n b) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='8 2 4 6 8 Size of continuation before fold Λn = 0 Λn = 0 Inconsistency Λ2n = 0 Figure 6: Size of continuation before a fold θ+ for (a) the Schnakenberg model and (b) the Gierer-Meinhardt model as γ and n is varied for each model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' The size of the continuation is presented in color.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Setting (a) β0 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='8 and (b) a0 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1 in each model respectively, Λn = maxjℜ (λj(An)) = 0 where λj(Am) are eigenvalues defined in Section 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1 has two solutions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' The solution with smallest γ is shown on the blue line and the other is shown on the red line.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' The background color of white indicates that no fold was found for these parameter sets and θ was allowed to grow to 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' In (b) the green dashed line is an overlay of the red line with half of the value of n for each γ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' This curve surprisingly traces a pattern of small θ+.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' In (a) a red × indicates a continuation that runs into numerical difficulties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
two values, we find that Λm > 0 and thus the mth mode of the homogeneous problem is unstable. On these curves, J̄0 is singular. As previously established, we expect that continuation is not possible on these curves. In the region shown in white, we found no upper bound on θ+. This region also corresponds to the subset of the parameter space where the associated homogeneous system is devoid of Turing patterning. The red curves furthest to the left correspond to m = 1 (the onset of Turing instability in the eigenfunction cos(πx) at θ = 0). Note that our growing heterogeneity cos(nπx) is also of this form, since n = 1 (see Equations (19) and (23)). We find that because of this a fold is very quick to form in the numerical continuation near the red curve corresponding to m = 1, but not near the onset of instability for the higher modes. Small θ+ is shown by darker colors in the plot. To investigate specifically whether small θ+ is associated with m = 1 because n = 1, we varied n in the Schnakenberg model from 1 to 10.
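The mode-stability indicator Λm = maxj ℜ(λj(Am)) behind the red curves of Figs. 5 and 6 is cheap to evaluate once a linearization is fixed. The sketch below is ours, not the paper's code: it assumes the classical (homogeneous, θ = 0) Schnakenberg kinetics on a unit interval with Neumann modes cos(mπx), so that Am = γJ − (mπ)²D for reaction Jacobian J and diffusion matrix D.

```python
import numpy as np

def schnakenberg_jacobian(a, b):
    # Homogeneous steady state of the classical Schnakenberg kinetics
    # f = a - u + u^2 v, g = b - u^2 v (illustrative; the paper's
    # heterogeneous model reduces to something like this only at theta = 0).
    u = a + b
    v = b / u**2
    return np.array([[-1.0 + 2.0 * u * v, u**2],
                     [-2.0 * u * v, -u**2]])

def Lambda_m(m, gamma, J, D):
    # Linearizing about the base state with Neumann modes cos(m*pi*x)
    # on [0, 1] gives A_m = gamma*J - (m*pi)^2 * D; mode m is unstable
    # exactly when the largest real part of the spectrum is positive.
    A_m = gamma * J - (m * np.pi) ** 2 * D
    return np.max(np.linalg.eigvals(A_m).real)
```

With the textbook Turing-unstable parameters a = 0.1, b = 0.9, D = diag(1, 10) and γ = 30, the m = 0 and m = 8 modes are stable while m = 1 is unstable, reproducing the band structure the red curves trace out.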
In Fig. 6, for each n, holding β0 = 0.8 (a) and a0 = 0.1 (b), we plot the size of the continuation θ+ as γ is increased. We indicate the minimum value of γ (blue line) and the maximum value of γ (red line) for which Λn = 0. That is, for n = 1 the blue and red curves correspond to the first and second intersections of β0 = 0.8 (a) and a0 = 0.1 (b) with the respective red curves in Fig. 5.
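The blue and red lines of Fig. 6 are the smallest and largest γ at which Λn = 0. For a concrete linearization these can be located by scanning Λn(γ) for sign changes and bisecting. The sketch below uses an illustrative Schnakenberg Jacobian (our parameter choices a = 0.1, b = 0.9, d = 10, not the paper's) with n = 1; for these values the unstable band works out analytically to γ ∈ [2π², 5π²].

```python
import numpy as np

def sign_change_roots(f, xs, tol=1e-10):
    # Scan f over the grid xs and refine every bracketed sign change
    # by bisection; returns the roots in increasing order.
    roots = []
    fs = [f(x) for x in xs]
    for a, b, fa, fb in zip(xs[:-1], xs[1:], fs[:-1], fs[1:]):
        if fa * fb < 0:
            lo, hi, flo = a, b, fa
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                fm = f(mid)
                if flo * fm <= 0:
                    hi = mid
                else:
                    lo, flo = mid, fm
            roots.append(0.5 * (lo + hi))
    return roots

# Illustrative Schnakenberg linearization at a = 0.1, b = 0.9, d = 10:
J = np.array([[0.8, 1.0], [-1.8, -1.0]])
D = np.diag([1.0, 10.0])

def Lambda_n(gamma, n=1):
    # Lambda_n = max_j Re(lambda_j(A_n)) with A_n = gamma*J - (n*pi)^2 D.
    A = gamma * J - (n * np.pi) ** 2 * D
    return np.max(np.linalg.eigvals(A).real)

gam_lo, gam_hi = sign_change_roots(Lambda_n, np.linspace(5.0, 80.0, 200))
```

For this Jacobian, det(γJ − π²D) = γ² − 7π²γ + 10π⁴, whose roots 2π² and 5π² are exactly the two zeros the scan recovers.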
We see that for each n, the size of θ+ is very small at both zeros of Λn. What is also surprising is that if n is larger than 1 and γ is smaller than that required to make the nth mode unstable in the homogeneous problem, the continuation did not fold. That is, we may have a Turing instability in the homogeneous problem because of an instability in the m = 1 mode, but if the heterogeneity has a higher spatial frequency, say n = 2, the base state may not encounter a fold readily. As the scale parameter γ is increased beyond the red line, we find what appears to be noise in θ+, but within this noise patterns appear. Looking specifically at the Gierer-Meinhardt model in Fig. 6 (b), we see small θ+ near the value of the maximum γ for which Λ2n = 0. We have indicated that this is the case by tracing the green dashed line over the expanse of small θ+. This effect can also be seen in Fig. 5 (a) for n = 1 in the left branch of the m = 2 red curve, which shows a noticeable dark shade. As γ increases, the magnitude to which θ can be continued before reaching a fold tends to increase, before no fold is reached at all. However, numerical instabilities are prevalent in this region, as shown specifically by the red × in Fig. 6 (a), so the accuracy of these results remains questionable. We shall look specifically at the continuation described by this red × in the next section. The numerical results seem to become more accurate as the spatial grid becomes finer and the maximum step size in θ becomes smaller. Due to the computational cost of producing parameter-scan results, the accuracy of the results presented here is limited.
Numerical Issues

The inconsistent numerical issues that occur occasionally in our parameter-sweeping experiments in the previous section are investigated here. In particular, we investigate the red × continuation in Fig. 6 (a).
Figure 7: Plot of branches for the numerically inconsistent case highlighted in Fig. 6 (a) with varying maximum step size. In purple, the base-state branch and the continuation through the fold point (green dot) with very small step sizes are shown. In yellow, a different branch is shown, and the × symbols show the updates in the continuation algorithm when the step size is too coarse. Plot (a) shows the full bifurcation diagram whilst plot (b) displays a zoomed version of the region enclosed in the red box to show detail near the fold point.

In this continuation a maximum step size of 10−1 was used.
This is a relatively large step size, but since the pde2path package adaptively adjusts the step size as needed, it can usually resolve the finer details without much increase in computational cost. However, in this case, the larger step size causes the solution to jump from one branch to another. This can be seen in the bifurcation diagram of Fig. 7, where for a small step size a fold is encountered early in the continuation, but for a large step size the continuation jumps to a different branch. Clearly the results in this region are unreliable. It is not clear how small the step size must be made in order to avoid this occurring. It does raise an interesting question, though. In this example, it is clear that the (yellow) branch that the coarse numerical algorithm found does not technically satisfy the numerical continuation criteria for a base state. That being said, looking at the distributions on either side of the singularity, it is possible that the yellow branch should perhaps be considered a base state. It remains unclear if such a suitable branch can be found for other cases. However, this case hints at the possibility that there may be a better definition for a base state than the one presented in this manuscript (one which can potentially always describe a unique state for all problems).
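The branch jumping of Fig. 7 can be reproduced on a toy fold. The sketch below is a deliberately simplified stand-in for pde2path's predictor-corrector, not its algorithm: the "base state" solves the scalar problem u³ − u = θ, whose lower branch folds at θ = 2/(3√3) ≈ 0.385, and the corrector is a bisection search confined to a trust window around the previous solution, with the window radius playing the role of the maximum step size.

```python
def solve_near(g, center, radius, tol=1e-12):
    # Corrector stand-in: scan [center - radius, center + radius] for a
    # sign change of g and refine it by bisection; return None when no
    # root lies inside the trust window.
    lo, hi = center - radius, center + radius
    xs = [lo + k * (hi - lo) / 400.0 for k in range(401)]
    for a, b in zip(xs, xs[1:]):
        if g(a) * g(b) <= 0.0:
            while b - a > tol:
                m = 0.5 * (a + b)
                if g(a) * g(m) <= 0.0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
    return None

def continue_branch(u0, theta0, dtheta, theta_end, radius):
    # Natural-parameter continuation of u^3 - u = theta.
    theta, u = theta0, u0
    while theta + dtheta <= theta_end:
        u_new = solve_near(lambda v: v**3 - v - (theta + dtheta), u, radius)
        if u_new is None:
            return theta, u, False  # no root in the window: fold detected
        theta, u = theta + dtheta, u_new
    return theta, u, True
```

Starting on the lower branch (u ≈ −1.19 at θ = −0.5), a tight window (radius 0.1) halts at θ ≈ 0.38 with the fold flag raised, while a wide window (radius 3.0) silently lands on the positive branch past the fold and continues all the way to θ = 1 — the same failure mode as the coarse maximum step size in Fig. 7.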
3.3 Critical domain length

The extension of the Turing instability to spatially-dependent RD systems allows us to distinguish between patterned states and the base states.
Previously these solution states were often indistinguishable. This meant that analysing certain phenomena, such as the critical domain length, was very challenging or impossible. Now that the Turing instability has a spatially-dependent analogue, we can study such phenomena. As a proof of concept, we will study how the critical domain length changes as the size of the heterogeneity in a spatially-dependent RD system increases. The critical domain length has important physical implications, especially in developmental scenarios. In a scenario where the domain is slowly growing, Turing patterns will arise only if the size of the domain is above the critical domain length. Therefore, assessing the impact of a spatially-dependent term on the critical domain length could have key implications for these developmental scenarios. We will investigate the change in the critical domain length with respect to the size of the heterogeneity for two different reaction terms. The critical domain length is encoded in a critical γ value which we will call γc. Denote by γc,0 ∈ R+ the critical γ value for the classical RD system, and by γc,θ ∈ R+ the critical γ value for the heterogeneous RD system with parameter θ. Further, define Lc,0 := √γc,0 and Lc,θ := √γc,θ as the respective critical domain lengths. Here we are accepting Lc = √γc as a non-dimensional equivalent of the critical domain length.
The value of γc,θ is defined as the largest γ such that the base state of Equations (7) and (8) is stable for all γ < γc,θ but exhibits Turing instabilities for some γ > γc,θ. It is infeasible to check all γ values less than some candidate value for γc,θ. Instead, we can rely on the fact that when γ = γc,0, Λm = 0, which can be calculated exactly for both the Schnakenberg model and the Gierer-Meinhardt model. Instead of parameterising the base-state branch with the size of the heterogeneity θ only, we will also parameterise with respect to γ. In doing so, we are assuming that a path-independence result holds. That is, the base state solution for some γ0 > 0 can be found by first finding the base state solution for another γ1 > 0, and then continuing from that base state solution with respect to γ to find the solution at γ0. Initially we will use γ = γc,0 to perform the continuation, as this is known exactly and we will assume that it is close to γc,θ. After finding a base state solution with the initial γ value, we perform numerical continuation with respect to γ, increasing or decreasing γ until finding γc,θ for a given θ. We reach the critical value γc,θ when the base state (with respect to γ but constant θ) undergoes a change of stability. If the base state found for γ = γc,0 is stable, then we will increase γ in the second-stage continuation. Likewise, we will decrease γ if the base state is unstable. Determining whether a steady state solution is stable can be done using inbuilt methods in pde2path [15]. We are relying on using γ = γc,0 as an initial condition for the continuation. However, based on recent analysis of heterogeneous RD systems, there are points where the system with θ = 0 is outside of the Turing region, yet we still expect to see Turing instabilities for a sufficiently large γ [6]. If the homogeneous system defined by θ = 0 is outside of the Turing region, it is unclear what the initial γ value should be. A further investigation into a method for finding the critical domain length in this case should be considered.
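The two-stage procedure above (start from γ = γc,0, then continue in γ until the base state changes stability) reduces to a one-dimensional search. The sketch below is schematic: `stable` stands in for whatever stability test is used (in practice, the sign of the leading eigenvalue reported by pde2path's inbuilt routines for the base state re-continued at each γ), and the marching step and tolerance are our illustrative choices.

```python
def critical_gamma(stable, gamma0, dgamma=0.5, tol=1e-8, max_march=10000):
    # Stage two of the continuation described in the text: starting from
    # gamma0 (= gamma_{c,0}, known exactly for the homogeneous problem),
    # march gamma upward if the base state is stable there and downward
    # if not, until the stability flag flips, then bisect onto the
    # change of stability, which defines gamma_{c,theta}.
    s0 = stable(gamma0)
    step = dgamma if s0 else -dgamma
    g_prev, g = gamma0, gamma0 + step
    for _ in range(max_march):
        if stable(g) != s0:
            break
        g_prev, g = g, g + step
    else:
        raise RuntimeError("no stability change found while marching")
    lo, hi = (g_prev, g) if g_prev < g else (g, g_prev)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if stable(mid) == stable(lo):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The same routine handles both cases mentioned in the text: a stable start marches γ up, an unstable start marches γ down, and both bisect onto the same critical value.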
Fig. 8 shows the critical domain length Lc for the Schnakenberg system for a range of θ and β0 values.

Figure 8: Critical domain lengths Lc,θ of the Schnakenberg system described in Section 2.2.1. The critical domain length is plotted for a range of heterogeneity sizes θ (θ = −1/2, −1/3, −1/6, 0, 1/6, 1/3, 1/2) as a function of the parameter β0.

The length Lc appears to be decreasing with respect
Figure 9: Critical domain lengths Lc,θ of the Gierer-Meinhardt system described in Section 2.2.2. The critical domain length is plotted for a range of heterogeneity sizes θ (θ = −1/2, −1/3, −1/6, 0, 1/6, 1/3, 1/2) as a function of the parameter a0. [Plot data omitted: panels show Lc and the percentage change in Lc.]
Figure 10: Production rates β and η for the first chemical, u, and the second chemical, v, for the Schnakenberg model of Section 2.2.1, together with the regions where the system is locally within the classical Turing pattern-generating parameter space. Plots (a) and (b) describe the model with β0 = 0.8 and β0 = 0.9, respectively. Both plots are made for θ = 1/3, for which we found a critical domain length for the system shown in (b), but not in (a). In (a), the regions driving the Turing instability in the whole domain are further apart, and it is possible that they are effectively decoupled. In that case, we would still expect to find a critical domain length, but a significantly larger one (with Turing patterns associated with the sub-domains that locally drive them).
to β0 and increasing with respect to θ. On the other hand, Fig. 9 shows that the critical domain length for the Gierer-Meinhardt system appears to have the reverse dependence on the parameter a0. For a given production rate, if the θ = 0 system is within the Turing region, then we expect a critical domain length for every other θ value, because the cosine heterogeneity will cause at least one interval of the domain to lie within the Turing region locally. Thus, for sufficiently large γ, we expect to see Turing patterns [6]. However, our method for finding the critical domain length fails in many of these cases. Most notably, the critical domain length could not be found for any β0 value when θ = 1/2, as seen in Fig. 8. This is potentially because of a decoupling effect between two intervals which are locally within the Turing region. Fig. 10 shows the regions where the systems with θ = 1/3 and β0 = 0.8, 0.9 are locally within the Turing region. As seen in Fig. 8, a critical domain length could be found for β0 = 0.9, but not for β0 = 0.8. Although the Turing regions are larger in the case β0 = 0.8, the gap between the two Turing regions is also larger. This gap could have a decoupling effect: if the two regions are close enough together, they can act as one region for the purposes of forming a Turing instability. That is, there is enough bleed-through from one region to the other to support a Turing pattern, despite an intervening region where no Turing pattern can be supported. In this case there would be a critical θ value beyond which γ must be significantly larger before Turing instabilities local to the respective Turing regions are observed.
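The "local Turing regions" above come from applying the classical Turing conditions pointwise. As a rough illustration only (not the authors' code, and using generic parameter names a, b, d rather than the paper's β, η), the following sketch checks whether the standard Schnakenberg kinetics f = a − u + u²v, g = b − u²v, with the second species diffusing d times faster than the first, satisfy the four classical Turing conditions at the homogeneous steady state:

```python
# Hedged sketch: classical Turing conditions for the nondimensional
# Schnakenberg kinetics  f = a - u + u^2 v,  g = b - u^2 v.
# Parameter names (a, b, d) are illustrative, not the paper's notation.

def schnakenberg_turing(a, b, d):
    """True if (a, b) lies in the Turing region for diffusion ratio d."""
    u = a + b                 # homogeneous steady state: u* = a + b
    v = b / u**2              # v* = b / (a + b)^2
    fu = -1.0 + 2.0 * u * v   # Jacobian entries evaluated at (u*, v*)
    fv = u**2
    gu = -2.0 * u * v
    gv = -u**2
    tr = fu + gv
    det = fu * gv - fv * gu   # equals (a + b)^2 for these kinetics
    return (tr < 0.0                      # stable without diffusion
            and det > 0.0
            and d * fu + gv > 0.0         # diffusion can destabilise
            and (d * fu + gv)**2 > 4.0 * d * det)  # a real unstable mode exists

print(schnakenberg_turing(0.1, 0.9, 10.0))  # unequal diffusion: Turing-unstable
print(schnakenberg_turing(0.1, 0.9, 1.0))   # equal diffusion: never Turing
```

Sweeping (a, b) over a grid with such a check is one way to reproduce the kind of pointwise Turing-region maps shown in Fig. 10.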
4 Conclusions

Despite being widely applicable to various problems in science, Turing instabilities in spatially-dependent reaction-diffusion systems have received very little attention in the literature. One of the roadblocks to understanding the behaviour of these systems is the lack of a definition for Turing instabilities when the problem depends on the spatial coordinate. The classical definition relies on the existence of a uniform steady state solution; however, no such steady state exists for spatially-dependent problems in general. In reformulating the definition, the problem arises of distinguishing between patterned states and the base state. The base state in the classical case is the uniform steady state. Since the steady state solutions of most spatially-dependent reaction-diffusion systems are non-uniform, it is unclear which states should be labelled 'patterned' and which should be labelled a 'base state'. In order to link the spatially-dependent case with the classical case, we utilise tools from continuation to gradually increase the size of the heterogeneity. That is, the spatially-dependent term (or heterogeneity) is parameterised such that it vanishes initially and grows to full amplitude as the introduced parameter increases. Once at full amplitude, the base solution to the reaction-diffusion equation is the solution found through continuation with a full-amplitude heterogeneity. This grounds the spatially-dependent base case to the classical base case and allows us to distinguish between patterned and non-patterned states. Defining the base solution through continuation also provides a method for computing it using numerical continuation.

While we have extended the definition of the Turing base state, this does not directly extend the definition of the Turing instability. Traditionally, a Turing instability requires the base state to be stable to constant perturbations, and unstable overall. The stability-to-constant-perturbations condition is not relevant for a spatially-dependent base state, so the extension of the first Turing condition is not trivial even after defining the base state. We therefore discussed a few possibilities for how this condition could be extended, and the benefits of each. Much more research can be done to analyse the properties of each of these definitions.

After defining the base state for heterogeneous Turing systems, it remains to determine whether such base states exist. We provided a variety of case studies showing that the existence of heterogeneous base states is not guaranteed. Further, we could not determine, a priori, whether base states exist for a finite-size heterogeneity. To investigate this further, two parameter scans were performed. The first varied the average production rate of the first chemical and the length of the domain; the second varied the form of the heterogeneity and the length of the domain. Both parameter scans were tested with both the Schnakenberg and the Gierer-Meinhardt reactions. For each set of parameters chosen, we measured how far the branch of solutions could be continued before reaching a fold bifurcation; this measures how large the heterogeneity can be before the Turing base state ceases to exist. The results of the parameter scans reveal strong correlations with existing, fundamental theory from the dispersion relation. Further research into a clear link between these theories is needed.
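The idea of continuing a branch until it ends at a fold can be sketched in miniature. The toy below is not the paper's discretised PDE: it applies natural-parameter continuation with Newton's method to the scalar problem g(u; s) = s − u + u³/3, whose solution branch through (u, s) = (0, 0) folds at u = 1 (where ∂g/∂u = 0, i.e. s = 2/3). Stepping s upward and reusing the previous solution as the initial guess, the last s for which Newton finds a nearby solution approximates the fold location — analogous to measuring the largest heterogeneity amplitude for which a base state exists.

```python
# Hedged toy sketch of natural-parameter continuation up to a fold.
# g(u; s) = s - u + u**3/3 has a fold at (u, s) = (1, 2/3): the branch
# starting at (0, 0) ceases to exist for s > 2/3.

def newton(g, dg, u0, tol=1e-12, maxit=50):
    """Newton's method; returns None on breakdown or non-convergence."""
    u = u0
    for _ in range(maxit):
        d = dg(u)
        if d == 0.0:
            return None
        du = g(u) / d
        u -= du
        if abs(du) < tol:
            return u
    return None

def continue_branch(ds=1e-3, jump_tol=0.2, s_max=2.0):
    """Step s upward from 0; return the last s with a nearby solution."""
    u, s = 0.0, 0.0
    while s < s_max:
        s_new = s + ds
        u_new = newton(lambda x: s_new - x + x**3 / 3.0,
                       lambda x: -1.0 + x**2, u)
        # Stop at the fold: Newton fails, or the iterate jumps branches.
        if u_new is None or abs(u_new - u) > jump_tol:
            break
        u, s = u_new, s_new
    return s

s_fold = continue_branch()
print(round(s_fold, 3))  # just below the analytic fold at s = 2/3
```

In the paper's setting, u would be a discretised steady-state profile and s the heterogeneity amplitude, but the stopping behaviour at the fold is the same in spirit.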
For small domain lengths, it becomes even more difficult to distinguish between patterned and non-patterned states, because the wavelengths of some patterns are often similar to the length scale of the heterogeneity. The new definition allows this distinction to be made, so systems with a small domain length can be analysed. This in turn allowed us to analyse how the critical domain length changes for heterogeneous reaction-diffusion systems. As a proof of concept of how the new definition can be applied, we numerically determined the critical domain length of the Schnakenberg and Gierer-Meinhardt systems for a range of heterogeneity sizes and average production rates. In some cases, however, the method we used to find the critical domain length failed. It is possible that there are discontinuities in the critical domain length caused by a decoupling in the domain, and the method should be further developed to account for this possibility.
References

[1] J. F. G. Auchmuty and G. Nicolis, Bifurcation analysis of nonlinear reaction-diffusion equations—I. Evolution equations and the steady state solutions, Bulletin of Mathematical Biology, 37 (1975), pp. 323–365, https://doi.org/10.1007/bf02459519.

[2] H. Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equations, Springer New York, 2011, https://doi.org/10.1007/978-0-387-70914-7.

[3] S. N. Chow and J. K. Hale, Methods of bifurcation theory, Grundlehren der mathematischen Wissenschaften, Springer, New York, NY, Nov. 2011.

[4] R. A. Van Gorder, Pattern formation from spatially heterogeneous reaction–diffusion systems, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 379 (2021), https://doi.org/10.1098/rsta.2021.0001.

[5] M. Kozák, E. A. Gaffney, and V. Klika, Pattern formation in reaction-diffusion systems with piecewise kinetic modulation: An example study of heterogeneous kinetics, Phys. Rev. E, 100 (2019), p. 042220, https://doi.org/10.1103/PhysRevE.100.042220.

[6] A. L. Krause, V. Klika, T. E. Woolley, and E. A. Gaffney, From one pattern into another: analysis of Turing patterns in heterogeneous domains via WKBJ, Journal of The Royal Society Interface, 17 (2020), p. 20190621, https://doi.org/10.1098/rsif.2019.0621.

[7] B. A. Lawson and M. B. Flegg, A mathematical model for the induction of the mammalian ureteric bud, Journal of Theoretical Biology, 394 (2016), pp. 43–56, https://doi.org/10.1016/j.jtbi.
+page_content='2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='025.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' [8] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' M´endez, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Fedotov, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Horsthemke, Reaction-transport systems: Mesoscopic foundations, fronts, and spatial instabilities, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' [9] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Page, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Maini, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Monk, Pattern formation in spatially heterogeneous Turing reaction–diffusion models, Physica D: Nonlinear Phe- nomena, 181 (2003), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' 80–101, https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' 1016/S0167-2789(03)00068-X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' [10] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Pickett and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Cadenasso, Landscape ecology: Spatial het- erogeneity in ecological systems, Science, 269 (1995), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' 331–334, https: //doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1126/science.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='269.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='5222.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='331, https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/abs/ https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='science.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/doi/pdf/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1126/science.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='269.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='5222.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='331.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' [11] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Sheth, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Marcon, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Bastida, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Junco, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Quin- tana, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Dahn, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Kmita, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Sharpe, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Ros, Hox genes regulate digit patterning by controlling the wavelength of a Turing-type mechanism, Science, 338 (2012), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' 1476–1480, https:// doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1126/science.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1226804, https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/abs/https: //www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='science.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/doi/pdf/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1126/science.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1226804.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' [12] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='-Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Sun, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Jusup, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Jin, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Wang, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Wang, Pattern transitions in spatial epidemics: Mechanisms and emergent properties, Physics of Life Reviews, 19 (2016), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' 43–73, https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/https: //doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1016/j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='plrev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='08.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' [13] U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Timm and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Okubo, Diffusion-driven instability in a predator-prey system with time-varying diffusivities, Journal of Mathematical Biology, 30 (1992), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' 307–320, https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1007/bf00176153.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' [14] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Turing, The chemical basis of morphogenesis, Philosophical Transactions of the Royal Society of London.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Series B, Biological Sciences, 237 (1952), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' 37–72, https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1098/rstb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1952.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='0012, https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/abs/https://royalsocietypublishing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='org/doi/ pdf/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1098/rstb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1952.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='0012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' 26 [15] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' Uecker, Numerical Continuation and Bifurcation in Nonlinear PDEs, Society for Industrial and Applied Mathematics, Jan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' 2021, https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='1137/1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content='9781611976618.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
+page_content=' 27' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'}
diff --git a/nNFIT4oBgHgl3EQfuCsf/content/tmp_files/2301.11341v1.pdf.txt b/nNFIT4oBgHgl3EQfuCsf/content/tmp_files/2301.11341v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e874a44bb4d5aa4b924dc44be1c99f8f79a32719
--- /dev/null
+++ b/nNFIT4oBgHgl3EQfuCsf/content/tmp_files/2301.11341v1.pdf.txt
@@ -0,0 +1,1062 @@
+Entanglement Purification of Hypergraph States
+Lina Vandré and Otfried Gühne
+Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Straße 3, 57068 Siegen, Germany
+(Dated: January 30, 2023)
+Entanglement purification describes a primitive in quantum information processing, where several
+copies of noisy quantum states are distilled into few copies of nearly-pure states of high quality via
+local operations and classical communication.
+Especially in the multiparticle case, the task of
+entanglement purification is complicated, as many inequivalent forms of pure state entanglement
+exist and purification protocols need to be tailored for different target states.
+In this paper we
+present optimized protocols for the purification of hypergraph states, which form a family of multi-
+qubit states that are relevant from several perspectives.
+We start by reformulating an existing
+purification protocol in a graphical language.
+This allows for systematical optimization and we
+present improvements in three directions. First, one can optimize the sequences of the protocol with
+respect to the ordering of the parties. Second, one can use adaptive schemes, where the measurement
+results obtained within the protocol are used to modify the protocols. Finally, one can improve the
+protocol with respect to the efficiency, requiring fewer copies of noisy states to reach a certain target
+state.
+I.
+INTRODUCTION
+For many tasks in quantum information processing one
+needs high-fidelity entangled states, but in practice most
+states are noisy. Purification protocols address this prob-
+lem and provide a method to transform a certain num-
+ber of copies of a noisy state into single copy with high-
+fidelity. The first protocols to purify Bell states were in-
+troduced by Bennett et al. and Deutsch et al. [1–3]. The
+concept was then further developed for different entan-
+gled states, especially in the multiparticle setting. This
+includes protocols for the purification of different kinds
+of states, such as graph states [4, 5], or W states [6], see
+also [7] for an overview.
+When analysing multiparticle entanglement, the expo-
+nentially increasing dimension of the Hilbert space ren-
+ders the discussion of arbitrary states difficult. It is there-
+fore a natural strategy to consider specific families of
+states which enable a simple description.
+Graph states
+[8] and hypergraph states [9–11] form such families of
+multi-qubit quantum states, as they can be described by
+a graphical formalism. Besides this, they have found
+applications in various contexts, ranging from quantum
+error correction [12, 13] and measurement-based quantum
+computation [14, 15] to Bell nonlocality [16–18] and state
+verification and self-testing [19, 20].
+states are a special case of the so-called locally maximally
+entangleable states [9].
+Concerning entanglement purification, the only known
+purification protocol which is valid for hypergraph states
+is formulated for LME states by Carle, Kraus, Dür, and
+de Vicente (CKDdV) [21]. In this paper we first ask how
+this protocol can be translated to the hypergraph for-
+malism. Based on this, we can then systematically develop
+improvements of the protocol.
+Our paper is organized as follows. In Section II we in-
+troduce our notation and review hypergraph states. We
+also recall how operations like cnot and Pauli operators
+act graphically. In Section III we reformulate the CK-
+Figure 1. Examples of graphs and hypergraphs. Figure
+(a) shows a fully connected graph, which corresponds to
+the three-qubit GHZ state. In the hypergraph-state for-
+malism one often draws edges as circles (right) instead of
+lines as in the graph-state formalism (left). The hyper-
+graph state corresponding to the hypergraph in the lower
+part (b) of the figure is locally unitarily equivalent to the
+state |H⟩ = (|000⟩ + |001⟩ + |010⟩ + |111⟩) /2.
+DdV purification protocol in a graphical manner, provid-
+ing a different language to understand it. Based on this,
+we propose systematic extensions in Section IV, which
+naturally arise from the graphical formalism. We first
+propose two approaches to make the protocol applicable
+to noisy states where the original CKDdV protocol fails.
+Later we propose a method that requires fewer copies of
+noisy states to reach a certain target state. In Section V
+we extend the protocol to more qubits. We summarize
+and conclude in Section VI.
+II.
+HYPERGRAPH STATES
+In this section we present a short introduction to the
+class of hypergraph states and the description of transfor-
+mations between them. Readers familiar with the topic
+arXiv:2301.11341v1 [quant-ph] 26 Jan 2023
+
+may directly skip to the next section.
+A.
+Definition of Hypergraph States
+A hypergraph H = (V, E) is a set V of vertices and
+hyperedges e ∈ E connecting them. Contrary to a nor-
+mal graph, the edges in a hypergraph may connect more
+than two vertices; examples of hypergraphs are given in
+Figure 1.
+Hypergraph states are multi-qubit quantum states,
+where the vertices and hyperedges of the hypergraph
+H = (V, E) represent qubits and entangling gates, re-
+spectively. The state |H⟩, corresponding to a hypergraph
+H = (V, E) is defined as
+|H⟩ = ∏_{e∈E} C_e |+⟩^{⊗|V|} ≡ U_ph |+⟩^{⊗|V|} ,    (1)
+where Ce is a generalized CZ gate, acting on qubits in
+the edge e as C_e = 1_e − 2 |11 . . . 1⟩⟨11 . . . 1|_e. If an edge
+contains only a single vertex, |e| = 1, then Ce reduces to
+the Pauli-Z operator, and for two-vertex edges Ce is just
+the standard two-qubit controlled phase gate. A detailed
+discussion on hypergraph state properties can be found
+in Refs. [22, 23].
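To make Eq. (1) concrete, the construction can be simulated directly on a state vector. The following sketch is illustrative and not part of the paper: the helper name `hypergraph_state` and the 0-based qubit labels are our own choices; each generalized CZ gate simply flips the sign of every computational-basis amplitude whose bits are all 1 on the qubits of a hyperedge.

```python
import numpy as np

def hypergraph_state(n, edges):
    """State vector of |H> = prod_{e in E} C_e |+>^{(x)n}, cf. Eq. (1).

    Qubit q corresponds to bit (n-1-q) of the basis-state index.
    C_e multiplies an amplitude by -1 iff all qubits in e are 1."""
    psi = np.full(2 ** n, 2 ** (-n / 2))  # |+>^{(x)n}
    for e in edges:
        for idx in range(2 ** n):
            if all((idx >> (n - 1 - q)) & 1 for q in e):
                psi[idx] *= -1
    return psi

# Three-qubit hypergraph with the single hyperedge {0,1,2}:
# all amplitudes equal +1/sqrt(8) except the amplitude of |111>.
psi = hypergraph_state(3, [{0, 1, 2}])
```

For a two-vertex edge this reduces to the usual controlled-phase gate, and for a single-vertex edge to a Pauli-Z, as stated in the text.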
+Similarly as for graph states, there is an alternative
+definition using so-called stabilizing operators. First, one
+can define for each vertex i a stabilizer operator,
+S_i = U_ph X_i U_ph^† ,    (2)
+where Xi denotes the first Pauli matrix acting on the i-th
+qubit and Uph denotes the collection of phase gates as in
+Eq. (1). Note that here only the gates with i ∈ e matter.
+The stabilizing operators are non-local hermitian observ-
+ables with eigenvalues ±1, they commute and generate
+an abelian group, the so-called stabilizer.
+Then, a hypergraph state may be defined as a com-
+mon eigenvector of all stabilizing operators Si. Here, one
+has to fix the eigenvalues of the Si. Often, the state de-
+fined in Equation (1) is called |H00...0⟩, as it is a common
+eigenstate of the Si with eigenvalue +1.
+By applying
+Pauli-Z gates on the state, one obtains states orthogo-
+nal to |H00...0⟩, where some of the eigenvalues are flipped
+to −1. By applying all possible combinations of Z gates,
+one obtains a basis: {|Hk⟩ = Zk |H0⟩}, where k is a bi-
+nary multi-index and Zk = �
+v∈V Zkv
+v . In this notation,
+it holds that Si |Hk⟩ = (−1)ki |Hk⟩. Hence, |Hk⟩ is an
+eigenstate of Si with eigenvalue (−1)ki. It is convenient
+to write arbitrary states in the hypergraph basis:
+ρ = ∑_{k,k′} c_{k,k′} |H_k⟩⟨H_{k′}| .    (3)
+Later we will purify states in this form to the state |H0⟩.
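The stabilizer relation S_i |H_k⟩ = (−1)^{k_i} |H_k⟩ can be checked numerically from Eq. (2). The sketch below (our own illustration; the helper `diag_bits` and the single-edge example are assumptions, not from the paper) exploits that U_ph and Z^k are real diagonal sign matrices, so U_ph^† = U_ph.

```python
import numpy as np
from itertools import product

def diag_bits(n, f):
    """Diagonal matrix whose (idx, idx) entry is f(bits of idx)."""
    bits = lambda idx: [(idx >> (n - 1 - q)) & 1 for q in range(n)]
    return np.diag([f(bits(idx)) for idx in range(2 ** n)])

n, edge = 3, {0, 1, 2}                    # hypergraph with one hyperedge
Uph = diag_bits(n, lambda b: -1.0 if all(b[q] for q in edge) else 1.0)
plus = np.full(2 ** n, 2 ** (-n / 2))
X, I2 = np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2)

for k in product([0, 1], repeat=n):
    Zk = diag_bits(n, lambda b: (-1.0) ** sum(k[q] * b[q] for q in range(n)))
    Hk = Zk @ Uph @ plus                  # |H_k> = Z^k U_ph |+...+>
    for i in range(n):
        Xi = np.eye(1)
        for q in range(n):
            Xi = np.kron(Xi, X if q == i else I2)
        Si = Uph @ Xi @ Uph               # U_ph real diagonal: U^dag = U
        assert np.allclose(Si @ Hk, (-1) ** k[i] * Hk)
```

The check passes for every binary multi-index k, confirming that {|H_k⟩} is an orthonormal eigenbasis of all S_i.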
+Figure 2. Example of a cnot1,4 gate (with control qubit 1
+and target qubit 4) performed on a hypergraph state. Left:
+Hypergraph with vertex set V = {1, . . . , 6} and edge set
+E = {{1}, {1, 2, 3}, {3}, {4}, {4, 5, 6}}. Right: Hypergraph af-
+ter applying cnot1,4. A new edge {1, 5, 6} appeared while the
+edge {1} vanished. The effect of applying the cnot1,4 gate is
+to introduce or delete edges from the set E4 = {{1}, {1, 5, 6}}.
+The underlying rule is the following [24]: One takes the so-
+called adjacency A(4) of the target qubit t = 4, where one
+first considers all edges that contain t, but then removes t
+from them. Here, we have A(4) = {{}, {5, 6}}. Then, E4 con-
+tains all edges which are unions of edges from A(4) and the
+edge {1} of the control qubit c = 1.
+B.
+Operations on Hypergraph States
+Many operations on hypergraph states can be repre-
+sented in a graphical manner. In the following we explain
+the effect of applying Pauli gates X and Z, measuring in
+the corresponding basis σx and σz, discuss how to rep-
+resent the cnot gate graphically [24], and introduce the
+reduction operator Pv1,v2 which we will need later. Note
+that in the following for Pauli matrices we use X and Z
+to denote the corresponding unitary transformation and
+σx and σz to denote the measurements. We only discuss
+transformations that are needed in the current paper,
+an overview on other transformations can be found in
+Ref. [23].
+We have already mentioned the action of the unitary
+transformation Zv on some qubit v.
+It adds the edge
+e = {v} to the set of edges E, if it was not contained
+before, or removes it otherwise. For example applying
+Z2 and Z3 to the left hypergraph state in Figure 2 would
+add a circle at vertex 2 and remove the one at vertex 3.
+The unitary transformation Xv on a vertex v of a
+hypergraph state |H⟩ corresponding to the hypergraph
+H = (V, E) is given by
+X_v |H⟩ = ∏_{e∈E} C_e ∏_{e′∈A(v)} C_{e′} |+⟩^{⊗|V|} ,    (4)
+where A(v) is the adjacency of vertex v. This is a set of
+edges defined as
+A(v) = {e − {v} | e ∈ E with v ∈ e}.
+(5)
+In words, to build the adjacency A(v) one first takes the set
+of edges that contain v and then removes v from them.
+Examples of local transformations X are given in Fig-
+ure 3.
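The graphical rule for X_v amounts to toggling the edges of A(v) in the edge set (toggling rather than adding, because C_e^2 = 1 cancels double edges). A small sketch, with helper names of our own choosing, reproduces the two steps of Figure 3:

```python
def adjacency(edges, v):
    """A(v) of Eq. (5): edges containing v, with v removed."""
    return [frozenset(e) - {v} for e in edges if v in e]

def apply_x(edges, v):
    """Graphical action of X_v, Eq. (4): toggle each edge of A(v)."""
    new = {frozenset(e) for e in edges}
    for a in adjacency(edges, v):
        new ^= {a}          # symmetric difference: C_e^2 = 1
    return new

# Figure 3: start from the single edge {1,2,3}, apply X_3 then X_2.
middle = apply_x([{1, 2, 3}], 3)   # A(3) = {{1,2}}: edge {1,2} appears
right = apply_x(middle, 2)         # A(2) = {{1},{1,3}}: both appear
```

The resulting edge sets match the dashed and dotted new edges described in the caption of Figure 3.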
+Let us discuss now the graphical description of some
+local measurements on hypergraph states.
+In order to
+
+Figure 3. Application of X operators on qubits 3 and 2. We
+first apply X3 on the left graph. The adjacency of qubit 3
+is given by A(3) = {{1, 2}}.
+This new edge is shown by
+the blue dashed line in the middle graph.
+We then apply
+X2 to the middle graph. The adjacency of qubit 2 is given
+by A(2) = {{1}, {1, 3}}. These new edges are shown by the
+dotted purple lines in the right graph.
+derive the post-measurement state after measuring vertex
+v, we can expand the state |H⟩ at this vertex as
+|H⟩ = (1/√2) |0⟩_v |H_0⟩ ± (1/√2) |1⟩_v |H_1⟩ ,    (6)
+where |H0⟩ and |H1⟩ are new hypergraph states with ver-
+tex set V0 = V1 = V \ {v} and edge sets E0 = {e ∈ E |
+v /∈ e} and E1 = E0 ∪ A(v) [23]. After measuring σz, we
+therefore either get the state |H0⟩ or |H1⟩. Measuring σx
+leads to a superposition of these two states and often the
+post-measurement state is then not a hypergraph state
+anymore. In our case, we only measure σx on qubits
+which are separated from other parts of the system, that
+is, where |H0⟩ = |H1⟩.
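The σz-measurement rule above acts purely on the edge sets and is easy to sketch; the function name is our own, and the rule E1 = E0 ∪ A(v) is taken from the quoted reference [23]:

```python
def measure_z(edges, v):
    """Edge sets of the post-measurement states |H_0> and |H_1>
    after a sigma_z measurement of vertex v:
    E0 drops every edge containing v; E1 = E0 with A(v) added."""
    E0 = {frozenset(e) for e in edges if v not in e}
    Av = {frozenset(e) - {v} for e in edges if v in e}
    return E0, E0 | Av

# Measuring vertex 3 on the edge set {{1,2,3},{4}}: outcome 0 keeps
# only {4}, outcome 1 additionally leaves the edge {1,2}.
E0, E1 = measure_z([{1, 2, 3}, {4}], 3)
```

Whenever A(v) introduces no edges beyond E0, the two branches coincide, E0 = E1, which is exactly the situation exploited for the σx measurements in the protocol.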
+Applying a cnotct gate on a hypergraph state H,
+where c is the control and t the target, introduces or
+deletes hyperedges of the set Et = {et ∪ c | et ∈ A(t)}.
+The new edge set after applying cnotct is given by
+E′ = E △ E_t ,    (7)
+where A △ B = (A ∪ B) \ (A ∩ B) is the symmetric difference of
+two sets. Since C_e^2 = 1, double edges cancel out. There-
+fore, the operation cnotct deletes edges which are in E
+and Et and introduces edges which are only in Et. For
+example, in the left part of Figure 2, the adjacency
+of vertex 4 is given by A(4) = {{}, {5, 6}} and therefore
+E4 = {{1}, {1, 5, 6}}.
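The cnot rule of Eq. (7) can be sketched in the same edge-set picture; the helper name `apply_cnot` is ours, and the example reproduces Figure 2:

```python
def apply_cnot(edges, c, t):
    """Graphical cnot_{c,t} of Eq. (7): E' = E (sym. difference) E_t,
    where E_t = {a | {c} : a in A(t)} and A(t) is the adjacency
    of the target qubit."""
    new = {frozenset(e) for e in edges}
    for f in (frozenset(e) - {t} | {c} for e in edges if t in e):
        new ^= {f}          # toggle: double edges cancel, C_e^2 = 1
    return new

# Figure 2: cnot_{1,4} on E = {{1},{1,2,3},{3},{4},{4,5,6}}
# removes the edge {1} and creates the edge {1,5,6}.
E = [{1}, {1, 2, 3}, {3}, {4}, {4, 5, 6}]
Eprime = apply_cnot(E, 1, 4)
```

Here A(4) = {{}, {5, 6}} yields E_t = {{1}, {1, 5, 6}}, so the symmetric difference deletes {1} (present in both sets) and introduces {1, 5, 6}, exactly as in the figure.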
+Finally, another operator which will be important later
+is the reduction operator Pv1,v2, which maps two qubits to
+a single qubit. In the computational basis, the reduction
+operator is written as
+P_{v1,v2} = |0⟩⟨00| + |1⟩⟨11| .    (8)
+It merges two vertices v1, v2 into one, which we call v2.
+This action changes edges which contain v1 into edges
+which contain v2 and cancels pairs of edges e ̸= e′ with
+(e \ {v1}) = (e′ \ {v2}). The new edge set will therefore
+be
+E′ = ({e ∈ E|v1 /∈ e}△{f ∪ {v2}|f ∈ A(v1)}).
+An example is shown in Figure 4.
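The edge-set rule for the reduction operator can be sketched as well. The initial edge set below is our reading of the left-hand hypergraph of Figure 4 (an assumption, since only the caption is quoted here); the doubled edge {1, 5, 6} then cancels exactly as the caption describes:

```python
def reduce_merge(edges, v1, v2):
    """Graphical action of P_{v1,v2}: keep the edges without v1 and
    toggle {f | {v2} : f in A(v1)}, so that double edges cancel."""
    new = {frozenset(e) for e in edges if v1 not in e}
    for f in (frozenset(e) - {v1} for e in edges if v1 in e):
        new ^= {f | {v2}}
    return new

# Assumed Figure 4 input: E = {{1,2,3},{4},{4,5,6},{1,5,6}}.
E = [{1, 2, 3}, {4}, {4, 5, 6}, {1, 5, 6}]
E1 = reduce_merge(E, 3, 6)    # {1,2,3} becomes {1,2,6}
E2 = reduce_merge(E1, 2, 5)   # {1,2,6} -> {1,5,6}, cancels the copy
```

After both reductions only the edges {4} and {4, 5, 6} survive, consistent with the cancellation of the doubled edge described in the caption.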
+III.
+THE CKDDV PURIFICATION PROTOCOL
+In this section we discuss the only known protocol
+which works for hypergraph states [21]; we will refer to it
+Figure 4. Application of the reduction projectors P3,6 and
+P2,5. Each projector merges two vertices and their correspond-
+ing edges into one. In the first step, we merge vertices 3 and 6.
+In the second step we merge vertices 2 and 5. This results in
+the same edge appearing twice, namely the green dashed edge
+{1, 5, 6} and the edge which was initially {1, 2, 3}; such double
+edges cancel out.
+as the CKDdV protocol. Originally, it was formulated for
+more general LME states. We first reformulate the pu-
+rification protocol in a graphical manner, which makes it
+intuitively understandable. Based on this reformulation,
+we can then propose improvements.
+In the simplest case, the aim is to purify a three-qubit
+state ρ to a pure hypergraph state, chosen to be the state
+|H0⟩ = C{123} |+⟩⊗3. The state is distributed between
+three parties, Alice, Bob, and Charlie.
+In the follow-
+ing, we explicitly describe the sub-protocol which reduces
+noise on Alice’s qubit. There are equivalent sub-protocols
+on Bob’s and Charlie’s qubits. The protocol is performed
+on two copies of a state ρ. Alice holds qubit a1 of the
+first state and qubit a2 of the second state, equivalently
+for Bob and Charlie.
+The key idea of the protocol is to induce a transforma-
+tion on the basis elements of the form
+|H_{i,j,k}⟩ |H_{i′,j′,k′}⟩ → δ_{i,i′} |H_{i,j+j′,k+k′}⟩ ,    (9)
+where δi,i′ denotes the Kronecker delta.
+This means
+that the sub-protocol compares the indices i, i′ on Al-
+ice’s qubits, and the state is discarded when i ̸= i′. This
+map drives a general state as in Eq. (3) closer to the de-
+sired hypergraph state. In detail, the sub-protocol which
+implements this transition is given by:
+Protocol 1 (CKDdV protocol).
+(0) Alice, Bob, and Charlie share two copies of a state.
+(i) Alice applies a local cnota1,a2 gate on her qubits.
+(ii) Bob and Charlie apply local reduction operators Pv1,v2
+on their qubits.
+(iii) Alice measures qubits a1 in the σx basis. She keeps
+the state, if the outcome is “+1”, and discards it other-
+wise.
+In Figure 5 it is shown how the basis elements
+|H000⟩ |Hi00⟩ transform.
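On the level of basis labels, the map of Eq. (9) is a simple comparison-and-add rule. A minimal sketch (the function name is our own; indices are bits added modulo 2):

```python
def sub_protocol_a(k, kp):
    """Action of Alice's sub-protocol on a pair of hypergraph-basis
    labels, Eq. (9): keep the pair only if Alice's indices agree;
    the surviving copy carries the mod-2 sums of the other indices."""
    (i, j, l), (ip, jp, lp) = k, kp
    if i != ip:
        return None               # Alice's outcome is "-1": discard
    return (i, (j + jp) % 2, (l + lp) % 2)

assert sub_protocol_a((0, 1, 0), (0, 0, 1)) == (0, 1, 1)
assert sub_protocol_a((1, 0, 0), (0, 1, 1)) is None
```

Iterating this rule over the diagonal coefficients of a state written in the basis of Eq. (3) shows how weight is concentrated on labels with the correct first index, driving the state towards |H0⟩.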
+
+Figure 5.
+The CKDdV protocol, as described in Protocol 1.
+In the figure, the transformation of the two basis el-
+ements |H000⟩ |H100⟩ is shown. In step (i), Alice performs a
+local cnot1,4 gate. Then, Bob and Charlie apply local re-
+duction operators P2,5 and P3,6, respectively. Double edges
+cancel out, so that the green dashed line and the former edge
+{1, 2, 3} vanish. In step (iii), Alice measures qubit 1 in the
+σx basis. If there is a single-qubit edge on vertex 1, as the
+orange one in this figure, her measurement outcome will be
+“−1” and therefore the state gets discarded. If one ignores
+all orange single-qubit edges in the figure, this corresponds
+to the transformation of the basis elements |H000⟩ |H000⟩. In
+this case, Alice’s measurement outcome will be “+1” and the
+remaining state |H000⟩ is kept.
+In order to purify the full state, one needs to choose a sequence in which these sub-protocols are applied to the different parties. In Ref. [21], the sequence ABC-CAB-BCA was favoured, as it seems to perform better than just repeating the sequence ABC. The reason is that Charlie's qubit becomes more noisy due to the back action of the sub-protocols purifying Alice's and Bob's qubits.
+IV. IMPROVING THE PROTOCOL PERFORMANCE
+In order to purify towards one state of a certain fidelity, one needs a number of input states which depends exponentially on the number of iterations, as in each run of the protocol a certain fraction of states is discarded. Therefore it is of high interest to apply the sub-protocols in a sequence which works as efficiently as possible. As already pointed out by Carle et al. [21], which sequence is the most advantageous depends on the input state, and it is not trivial to see which sequence is optimal. Carle et al. decided to use the sequence S = ABC-CAB-BCA in all their applications, since it performs well in many cases. In the following we ask whether the proposed sequence really is the best and how we can potentially find better sequences.
+One should also notice that in step (ii) of the protocol a large fraction of states is discarded. The operator Pv1,v2 corresponds to a positive map which maps two qubits in the same state to one qubit; if the two qubits are in different states, both are discarded. This can be seen as one outcome of a measurement. So, in the second part of this section we ask whether one can reduce the number of discarded states.
+A. Improved and Adaptive Sequences
+Consider a noisy three-qubit state ρ(p), where p is a noise parameter for some noise model, which should be purified to the pure hypergraph state |H000⟩⟨H000|. Clearly, for a fixed sequence S there is a maximal amount of noise up to which the state can still be purified, and there is a regime where one cannot purify it any more. Interestingly, for some parameter regimes where the state cannot be purified, the purification protocol does not converge towards a state with random noise, but towards a specific state which is a mixture of two states: either (1/2)(|H000⟩⟨H000| + |H001⟩⟨H001|), (1/2)(|H000⟩⟨H000| + |H010⟩⟨H010|), or (1/2)(|H000⟩⟨H000| + |H100⟩⟨H100|). This observation gives insight into how well the purification works on the different parties. The protocol eliminates noise on two parties but fails on the third party. For example, if we apply the sequence S = ABC, in the cases we tested there is a regime where the state does not get purified but converges to (1/2)(|H000⟩⟨H000| + |H001⟩⟨H001|).
+This is consistent with the explanation given in Ref. [21] that the purification has a disadvantage on Charlie's side. This may be explained as follows: By performing the protocol at one party, one aims to reduce noise on this party. As an unwanted side effect, one increases noise on the other parties. This happens because if there is noise on the first input state, the local reduction operator will “copy” it to the second state (see Equation (9)). So, when choosing the sequence S = ABC, one increases the noise on Charlie's qubit twice before purifying it for the first time.
+How well the protocol performs on each party can be analysed using the measurement statistics obtained in step (iii) of the protocol. The probability of measuring outcome “+1” in step (iii) on a qubit belonging to a certain party indicates how much noise the state on this party carries. On the perfect target state, one does not detect any noise and therefore measures outcome “+1” with probability equal to one. If one applies the protocol to the state (1/2)(|H000⟩⟨H000| + |H001⟩⟨H001|), however, one obtains outcome “+1” with a probability equal to one or 0.5, depending on which sub-protocol was applied. If it was the sub-protocol where Alice's or Bob's qubit is measured in step (iii), the probability is equal to one. If it was the sub-protocol where Charlie's qubit is measured, the probability is 0.5. So, by evaluating the probabilities of measuring outcome “+1” in step (iii) of the protocol, one can adapt the protocol to the given state.
+All in all, we use two approaches to find better sequences. The first approach is to find an optimal sequence which allows a high noise tolerance and is then applied without further observation of the statistics. The second approach uses two sequences, where we switch from one to the other depending on the measurement outcomes during the process. The first approach helps to find sequences which are more efficient also for the purification of states with a low noise level. The second approach gives a method to purify states which would not be purifiable otherwise.
+
+        Ewn(ρ, p)           Edeph(ρ, p)         Edepo(ρ, p)
+S1   ABC-CBA-ABC         ABC-CBA-CBA         ABC-CAB-BCA
+S2   BAB-CAB-ABA         CCC-ACB-CBC         BBB-BCB-BBB-BAB
+⃗a    (0.33, 0.35, 0.32)  (0.35, 0.43, 0.21)  (0.35, 0.34, 0.31)
+b    0.35                0.39                0.44
+
+Table I. Sequences S1, S2, approximate weight vectors ⃗a, and bounds b for states with three kinds of noise. See the text for an explanation.
+To find an advantageous sequence in the first approach, we consider input states which are slightly too noisy to be purified with the standard sequence from [21]. We need sufficiently many states so that we can estimate the probability of measuring “±1” in step (iii) of the protocol. If the purification works, the probability of measuring “−1” tends to zero; otherwise it tends to 0.5. Knowing the probability at each step of the protocol, and therefore on which party the purification fails, we can update our sequence such that the new sequence gives an advantage to the party which failed before. This process can be repeated until we do not find a better sequence of a certain length. We restricted ourselves to sequences of length nine. The best sequence we find in this way we call S1.
+With the second approach, we give a way to purify states which cannot be purified by sequence S1 because their initial fidelity is slightly beyond the threshold. We start using sequence S1 and switch to sequence S2 depending on the measurement outcomes of step (iii). Our switching condition is the following: After each measurement of step (iii), we evaluate the probability of measuring “−1” for the given party. Based on the last three probabilities associated with the same party, we decide whether to switch or not. With ⃗x being the vector of these three probabilities, where x3 is the newest probability, we switch if the scalar product ⃗a · ⃗x exceeds a bound b, where ⃗a is a weight vector.
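As an illustration, the switching rule amounts to a few lines of code; the function name below is our own, and the weight vector and bound are the values from the dephasing column of Table I:

```python
import numpy as np

def should_switch(last_three_probs, a, b):
    """Switch from sequence S1 to S2 when the weighted combination a . x
    of the last three '-1' probabilities of a party exceeds the bound b.
    last_three_probs[2] is the newest probability."""
    return float(np.dot(a, last_three_probs)) > b

# weight vector and bound for dephasing noise (Table I)
a, b = (0.35, 0.43, 0.21), 0.39
```

If purification succeeds, the “−1” probabilities tend to zero and no switch occurs; if they stay near 0.5, the weighted sum exceeds b and the protocol switches to S2.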
+To see the efficiency of our methods, we consider different noise models. We analyze the influence of global white noise described by the channel
+Ewn(ρ, p) = p ρ + (1 − p)/2^n 1,  (10)
+where n is the number of qubits; in this section, n = 3. We further analyse local noise channels given by E(ρ, p) = (E1 ∘ · · · ∘ En)(ρ, p), where Ei is
+              pmin from [21]  pmin from S1  pmin from adaptive protocol
+Ewn(ρ, p)     0.6007          0.5878        0.5876
+Edeph(ρ, p)   0.8013          0.7803        0.7747
+Edepo(ρ, p)   0.8136          0.8136        0.8132
+
+Table II. Noise thresholds pmin reproduced from Ref. [21], obtained from our sequences S1 (see Table I), and from the adaptive approach. In the case of Edepo(ρ, p) we found that the sequence from Ref. [21] was already the best sequence of length 9; therefore there is no improvement of pmin in this case.
+either the dephasing channel
+E^i_deph(ρ, p) = p ρ + (1 − p)/2 (ρ + Zi ρ Zi)  (11)
+or the depolarizing channel
+E^i_depo(ρ, p) = p ρ + (1 − p)/4 (ρ + Xi ρ Xi + Yi ρ Yi + Zi ρ Zi).  (12)
+The sequences, weight vectors and bounds we found to be optimal are given in Table I. To compare the approaches, we give in Table II the noise thresholds found in Ref. [21], obtained by our sequence S1, and by the adaptive approach. The sequences we found are also better in other respects: if we apply the new sequence S1 for nine rounds to given input states, the output states have a higher fidelity than after purifying the same states for nine rounds using the sequence given in Ref. [21].
+B. Recycling of Discarded States
+If one wishes to purify a state using the CKDdV protocol, one needs a high number of input states in order to obtain one state of a certain fidelity. Let us count how many states we need to obtain one state after applying the protocol once. In step (0) of the protocol, one takes two input states. One does not lose states by applying cnot in step (i). By applying the reduction operator Pv1,v2, approximately 1/2 of the pairs are lost. Since this operator is applied on two parties in step (ii), one needs approximately four pairs. In step (iii), one measures outcome “+1” with a probability ⩽ 1. This probability depends on the fidelity of the states and increases with increasing fidelity. So, in total, approximately 8 = 2^3 input states are required to obtain one output state. To prepare a state for which we need to apply the protocol m times, we need more than 8^m input states. To purify, for example, a state of initial fidelity 0.93 to a state of fidelity 0.994, we need three steps. The required number of input states to obtain one output state is roughly 8.7^3 ≈ 660. If we want to purify the same state to a fidelity of 0.999, which we reach after six steps, we need about 8.38^6 ≈ 346 000 input states to get one new state.
+
+It is natural to try to use the available quantum states more efficiently. In step (ii) of the CKDdV protocol, one performs a projective measurement and considers only one outcome, namely Pv1,v2, which one obtains with probability approximately 1/2. We suggest to use the states which were discarded because something different than Pv1,v2 was measured. The second reduction operator P⊥v1,v2 is orthogonal to Pv1,v2 and defined as
+P⊥v1,v2 = |0⟩⟨10| + |1⟩⟨01| = Pv1,v2 (Xv1 ⊗ 1v2).  (13)
+Like Pv1,v2, the operator P⊥v1,v2 is a positive map. It maps two qubits which are in different states to one qubit. This can be seen as a different measurement outcome than Pv1,v2, or one may interpret the set {Pv1,v2, P⊥v1,v2} as a quantum instrument.
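Both reduction operators can be written as 2×4 matrices, and Eq. (13) as well as the completeness relation P†P + P⊥†P⊥ = 1 (the defining property of the instrument) can be checked directly:

```python
import numpy as np

# reduction operators as maps from two qubits to one qubit
P = np.array([[1, 0, 0, 0],       # |0><00|
              [0, 0, 0, 1]])      # + |1><11|
Pperp = np.array([[0, 0, 1, 0],   # |0><10|
                  [0, 1, 0, 0]])  # + |1><01|
X = np.array([[0, 1], [1, 0]])

# Eq. (13): P_perp = P (X tensor 1)
eq13 = np.allclose(Pperp, P @ np.kron(X, np.eye(2)))
# completeness: P^dag P + P_perp^dag P_perp = 1, so the two outcomes
# together account for every input state
complete = np.allclose(P.T @ P + Pperp.T @ Pperp, np.eye(4))
```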
+In the original CKDdV protocol one keeps the state only after measuring Pb1,b2 Pc1,c2. There are three more possible measurement outcomes: Pb1,b2 P⊥c1,c2, P⊥b1,b2 Pc1,c2, and P⊥b1,b2 P⊥c1,c2. In the cases where P⊥v1,v2 is measured on at least one party, one obtains a post-measurement state to which one can apply some corrections in order to get a state similar to the input state. One can collect these states and purify them further.
+So, one can write down a modified version of the CKDdV protocol. Here, we give the sub-protocol which reduces noise on Alice's qubits; the sub-protocols for Bob and Charlie work equivalently.
+Protocol 2 (Improved CKDdV protocol).
+(0) Alice, Bob, and Charlie share two copies of a state.
+(i) Alice applies a local cnota1,a2 gate on her qubits.
+(ii) Bob and Charlie perform a measurement on their qubits, measuring the local reduction operators Pv1,v2 and P⊥v1,v2. If the measurement outcome for both Bob and Charlie was Pv1,v2, continue with step (iiia). Else, continue with (iiib).
+(iiia) After Bob and Charlie both measured Pv1,v2, Alice measures qubit a1 in the σx basis. She keeps the state if the outcome is “+1” and discards it otherwise.
+(iiib) After measuring P⊥v1,v2 on at least one pair of Bob and Charlie's qubits, Alice measures her qubit a1 in the σz basis. If she measures “+1”, she keeps the state as it is. Otherwise, Bob and Charlie apply some local unitaries, which depend on the combination of measurement outcomes in step (ii) and are given in Table III.
+The key idea is that output states from step (iiib) can be collected and purified further. In the case of measuring P⊥v1,v2 on at least one party, the protocol gives the transition
+|Hi,j,k⟩ |Hi′,j′,k′⟩ → |Hi′,j+j′,k+k′⟩.  (14)
+The resulting state has in general a lower fidelity than the input state. This is due to the same “copying” of noise discussed before: since in this case the protocol does not reduce noise, the fidelity drops.
+Measurement outcome   Local correction Bob   Local correction Charlie
+Pb1,b2 P⊥c1,c2        Z                      1
+P⊥b1,b2 Pc1,c2        1                      Z
+P⊥b1,b2 P⊥c1,c2       Z                      Z
+
+Table III. In step (iiib) of Protocol 2, Alice measures her qubit a1 in the Z basis. If her outcome is “−1”, Bob and Charlie have to apply local corrections to their qubits. The local corrections depend on their measurement outcomes from step (ii) and are given in this table. The first case is shown in Figure 6.
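The transition (14) and the Z correction of Table III can again be verified numerically. The sketch below is our own illustration under the same assumptions as before (|Hijk⟩ as the Z-decorated |H000⟩ = CCZ|+++⟩, cnot with control 1 and target 4); it runs branch (iiib) for the outcome Pb1,b2 P⊥c1,c2:

```python
import numpy as np
from itertools import product

def hyper_basis_state(i, j, k):
    """|H_ijk> = Z1^i Z2^j Z3^k CCZ_123 |+++> in the computational basis."""
    psi = np.zeros(8)
    for n1, n2, n3 in product((0, 1), repeat=3):
        psi[4 * n1 + 2 * n2 + n3] = (-1) ** (n1 * n2 * n3
                                             + i * n1 + j * n2 + k * n3) / np.sqrt(8)
    return psi

def recycle_branch(i, j, k, ip, jp, kp, outcome):
    """Branch (iiib) for the outcome P_{b1,b2} Pperp_{c1,c2}: Bob measures
    P on qubits (2,5), Charlie measures Pperp on (3,6), and Alice reads her
    qubit 1 in the Z basis with result `outcome` (0 = '+1', 1 = '-1').
    For outcome '-1', Bob applies the Z correction of Table III.
    Returns the unnormalised 3-qubit output state."""
    psi = np.kron(hyper_basis_state(i, j, k), hyper_basis_state(ip, jp, kp))
    out = np.zeros(64)
    for n in range(64):                       # (i) cnot, control 1, target 4
        b = [(n >> (5 - q)) & 1 for q in range(6)]
        b[3] ^= b[0]
        out[sum(bit << (5 - q) for q, bit in enumerate(b))] = psi[n]
    res = np.zeros(8)
    for n in range(64):
        b = [(n >> (5 - q)) & 1 for q in range(6)]
        # (ii) P on (2,5) keeps b2 == b5; Pperp on (3,6) keeps b3 != b6,
        # the merged qubit taking the value b6; (iiib) fix Alice's outcome
        if b[0] == outcome and b[1] == b[4] and b[2] != b[5]:
            res[4 * b[3] + 2 * b[4] + b[5]] += out[n]
    if outcome == 1:                          # Table III: Bob applies Z
        for m in range(8):
            if (m >> 1) & 1:
                res[m] *= -1
    return res
```

For i = j = k = 0 and (i′, j′, k′) = (1, 0, 0), both Z outcomes then yield a state proportional to |H100⟩, as predicted by Eq. (14) and illustrated in Figure 6.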
+Figure 6. Modified Protocol 2 for the same initial states as shown in Figure 5, for the case that Pb1,b2 P⊥c1,c2 is measured in step (ii). Alice performs a σz measurement on her qubit 1 of the state in the second row. If she gets outcome “+1” in step (iiib), the resulting state is the same as the initial state (qubits 4, 5 and 6). If she gets outcome “−1”, Bob's qubit 5 has a decoration, which he needs to correct. After Bob applies a local Z5 unitary on qubit 5, the resulting state is again the same as the initial state (qubits 4, 5 and 6). Note that this is only the case if there is no noise on qubits 2 and 3, as shown in this figure. In general one obtains the state given in Equation (14).
+An example for Protocol 2 is shown in Figure 6, where we assume that Bob measures P2,5 and Charlie measures P⊥3,6. In this case, the local correction after measuring outcome “−1” is to apply a unitary Z5 on qubit 5.
+Given a certain number of input states which we want to purify to a target fidelity, we obtain more output states
+Figure 7. Effect of using Protocol 2 instead of the original CKDdV protocol. The input states are given by Ewn(|H0⟩⟨H0| , p). We first apply Protocol 1 three times and compute the fidelity F3 of the output states. Then, we apply Protocol 2 on the same input states and compare how many more output states of fidelity ⩾ F3 we get. The figure displays the increase in the number of output states obtained by using Protocol 2, depending on the fidelity F0 of the input states.
+of the desired fidelity if we follow Protocol 2 instead of the original CKDdV protocol. The effect in the cases we tested turned out, however, to be small. As input states, we chose the state |H000⟩⟨H000| mixed with white noise. We first applied Protocol 1 three times, that is, once on each party, and computed the fidelity F3 of the output states. Then, we applied Protocol 2 on the same input states and compared how many more output states of fidelity ⩾ F3 we get. In Figure 7 we show how much the number of output states increases by using Protocol 2, depending on the fidelity F0 of the input states. In the chosen cases, we get approximately 4 % more output states from using Protocol 2 instead of the CKDdV protocol.
+V. GENERALISATION TO MORE QUBITS
+states with more qubits and different arrangement of
+edges. We restrict our attention to hypergraphs which
+are k-regular and k-colorable. A hypergraph is k-regular,
+if all edges e ∈ E have order k and it is k-colorable, if it is
+possible to color vertices of a hypergraph using k colors
+such that no two vertices of the same color share a com-
+mon edge. For example, the hypergraph states shown in
+Figures 2 and 8 are 3-colorable and 3-regular. In this
+section we discuss purification protocols to hypergraph
+states of more than 3 qubits which are 3-colorable and
+3-regular. In the following, we will denote the colors by
+A, B, and C.
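These two hypergraph properties are easy to state as code; a brute-force sketch (with our own helper names) suffices for the small hypergraphs considered here:

```python
from itertools import product

def is_k_regular(edges, k):
    """A hypergraph is k-regular if every edge contains exactly k vertices."""
    return all(len(e) == k for e in edges)

def is_k_colorable(vertices, edges, k):
    """Brute-force check: a coloring is proper if no edge contains two
    vertices of the same color."""
    vs = list(vertices)
    for coloring in product(range(k), repeat=len(vs)):
        col = dict(zip(vs, coloring))
        if all(len({v for v in e if col[v] == c}) <= 1
               for e in edges for c in range(k)):
            return True
    return False
```

The linear 4-qubit hypergraph of Figure 8, with edges {1, 2, 3} and {2, 3, 4}, is 3-regular and 3-colorable with the coloring A, B, C, A.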
+The protocols can be generalised by letting all parties holding qubits of color A do what was described for Alice before. In the same way, parties holding a qubit of color B or C do what was described for Bob or Charlie, respectively. For an explicit formulation of the generalised protocol, see Ref. [21].
+
+               pmin from SCKDdV  pmin from S1  Sequence S1
+Ewn(ρ3, p)     0.6007            0.5878        ABC-CBA-ABC
+Ewn(ρ4, p)     0.4633            0.4396        ABC-ACB-BCA
+Ewn(ρ5, p)     0.3901            0.3486        ABC-ABC-CBA
+Ewn(ρ6, p)     0.3341            0.3017        ABC-ACB-BAC*
+Edeph(ρ3, p)   0.8013            0.7803        ABC-CBA-CBA
+Edeph(ρ4, p)   0.8014            0.7803        ABC-CBA-CBA*
+Edeph(ρ5, p)   0.8014            0.7803        ABC-CBA-CBA*
+Edeph(ρ6, p)   0.8014            0.7803        ABC-CBA-CBA*
+Edepo(ρ3, p)   0.8137            0.8136        ABC-CAB-BCA
+Edepo(ρ4, p)   0.8306            0.8122        BAC-CBA-CAB
+Edepo(ρ5, p)   0.8358            0.8128        ACB-BCA-CBA
+Edepo(ρ6, p)   0.8144            0.8121        ABC-CBA-CAB
+
+Table IV. Noise thresholds pmin for the sequence SCKDdV proposed in Ref. [21] and the new sequences S1. The index of the state gives the number of qubits. In the case of Edepo(ρ3, p) we found that the sequence from Ref. [21] was already the best sequence of length 9; therefore there is no improvement of pmin. When we found (non-trivially) different sequences of the same length, we marked them with a star (*).
+We analysed linear three-colorable states with up to six qubits under the influence of global white noise, dephasing, and depolarisation. That is, the states to which we want to purify are U123U234 |+⟩⊗4, U123U234U345 |+⟩⊗5, and U123U234U345U456 |+⟩⊗6, as shown in Figure 8. We compare the noise threshold pmin for the sequence proposed in Ref. [21] with new sequences S1, found using the methods described in Section IV A. Our results are shown in Table IV. One sees that in the case of white noise, for more qubits the differences in the noise threshold pmin become more significant. Therefore, especially in these cases it is more relevant to find good sequences. For the tested states with dephasing and depolarisation noise, the noise threshold is constant or varies slightly, respectively.
+VI. CONCLUSION AND OUTLOOK
+In this paper we discussed protocols for the entanglement purification of hypergraph states. First, we reformulated the CKDdV protocol in a graphical language. This offers a new way to understand the protocol and, furthermore, allows one to search for systematic extensions. Consequently, we introduced several improvements of the original protocol. These improvements are based on different sequences, adaptive schemes, as well as methods to recycle some of the unused states. While these modifications are conceptually interesting and can indeed improve the
+
+Figure 8. Linear 3-colorable and 3-regular hypergraph states with 4, 5, and 6 qubits. The colors are denoted by A, B, and C. Note that two qubits which have the same color, for example qubits 1 and 4, still belong to different parties. Since we are restricted to local operations, we can only perform operations on qubits of the same party, that is, in general not on qubits of the same color.
+performance in various examples, the amount of improvement in realistic examples seems rather modest.
+The problem of finding efficient sequences is also relevant for purification protocols for other states and was raised, for example, in Ref. [4] in the context of two-colorable graph states. The methods developed here can be applied to this case, but also to all purification protocols which follow the concept introduced by Bennett et al. [1].
+A further open question is how the effects of our methods scale with the number of qubits. Another open question is whether Protocol 2 can be further improved so that the effect becomes more significant.
+VII. ACKNOWLEDGMENTS
+We thank Mariami Gachechiladze, Kiara Hansenne, Jan L. Bönsel, and Fabian Zickgraf for discussions. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project numbers 447948357 and 440958198), the Sino-German Center for Research Promotion (Project M-0294), the ERC (Consolidator Grant 683107/TempoQ), the German Ministry of Education and Research (Project QuKuK, BMBF Grant No. 16KIS1618K), and the Stiftung der Deutschen Wirtschaft.
+[1] C. H. Bennett, H. J. Bernstein, S. Popescu, and B. Schumacher, Phys. Rev. A 53, 2046 (1996).
+[2] C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Phys. Rev. A 54, 3824 (1996).
+[3] D. Deutsch, A. Ekert, R. Jozsa, C. Macchiavello, S. Popescu, and A. Sanpera, Phys. Rev. Lett. 77, 2818 (1996).
+[4] H. Aschauer, W. Dür, and H.-J. Briegel, Phys. Rev. A 71, 012319 (2005).
+[5] C. Kruszynska, A. Miyake, H. J. Briegel, and W. Dür, Phys. Rev. A 74, 052316 (2006).
+[6] A. Miyake and H. J. Briegel, Phys. Rev. Lett. 95, 220501 (2005).
+[7] W. Dür and H. J. Briegel, Rep. Prog. Phys. 70, 1381 (2007).
+[8] M. Hein, J. Eisert, and H. J. Briegel, Phys. Rev. A 69, 062311 (2004).
+[9] C. Kruszynska and B. Kraus, Phys. Rev. A 79, 052304 (2009).
+[10] R. Qu, J. Wang, Z.-s. Li, and Y.-r. Bao, Phys. Rev. A 87, 022311 (2013).
+[11] M. Rossi, M. Huber, D. Bruß, and C. Macchiavello, New J. Phys. 15, 113022 (2013).
+[12] P. W. Shor, Phys. Rev. A 52, R2493 (1995).
+[13] T. Wagner, H. Kampermann, and D. Bruß, J. Phys. A: Math. Theor. 51, 125302 (2018).
+[14] R. Raussendorf and H. J. Briegel, Phys. Rev. Lett. 86, 5188 (2001).
+[15] M. Gachechiladze, O. Gühne, and A. Miyake, Phys. Rev. A 99, 052304 (2019).
+[16] V. Scarani, A. Acín, E. Schenck, and M. Aspelmeyer, Phys. Rev. A 71, 042325 (2005).
+[17] O. Gühne, G. Tóth, P. Hyllus, and H. J. Briegel, Phys. Rev. Lett. 95, 120405 (2005).
+[18] M. Gachechiladze, C. Budroni, and O. Gühne, Phys. Rev. Lett. 116, 062321 (2016).
+[19] T. Morimae, Y. Takeuchi, and M. Hayashi, Phys. Rev. A 96, 062321 (2017).
+[20] F. Baccari, R. Augusiak, I. Šupić, J. Tura, and A. Acín, Phys. Rev. Lett. 124, 020402 (2020).
+[21] T. Carle, B. Kraus, W. Dür, and J. I. de Vicente, Phys. Rev. A 87, 012328 (2013).
+[22] O. Gühne, M. Cuquet, F. E. S. Steinhoff, T. Moroder, M. Rossi, D. Bruß, B. Kraus, and C. Macchiavello, J. Phys. A: Math. Theor. 47, 335303 (2014).
+[23] M. Gachechiladze, Quantum Hypergraph States and the Theory of Multiparticle Entanglement, Dissertation, University of Siegen (2019).
+[24] M. Gachechiladze, N. Tsimakuridze, and O. Gühne, J. Phys. A: Math. Theor. 50, 19LT01 (2017).
+
diff --git a/nNFIT4oBgHgl3EQfuCsf/content/tmp_files/load_file.txt b/nNFIT4oBgHgl3EQfuCsf/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..55b4d684e2d15f989d5ef9dab8ab068e4cf8ff1a
--- /dev/null
+++ b/nNFIT4oBgHgl3EQfuCsf/content/tmp_files/load_file.txt
@@ -0,0 +1,610 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf,len=609
+page_content='Entanglement Purification of Hypergraph States Lina Vandré and Otfried Gühne Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Straße 3, 57068 Siegen, Germany (Dated: January 30, 2023) Entanglement purification describes a primitive in quantum information processing, where several copies of noisy quantum states are distilled into few copies of nearly-pure states of high quality via local operations and classical communication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Especially in the multiparticle case, the task of entanglement purification is complicated, as many inequivalent forms of pure state entanglement exist and purification protocols need to be tailored for different target states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' In this paper we present optimized protocols for the purification of hypergraph states, which form a family of multi- qubit states that are relevant from several perspectives.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' We start by reformulating an existing purification protocol in a graphical language.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' This allows for systematical optimization and we present improvements in three directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' First, one can optimize the sequences of the protocol with respect to the ordering of the parties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Second, one can use adaptive schemes, where the measurement results obtained within the protocol are used to modify the protocols.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Finally, one can improve the protocol with respect to the efficiency, requiring fewer copies of noisy states to reach a certain target state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' INTRODUCTION For many tasks in quantum information processing one needs high-fidelity entangled states, but in practice most states are noisy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Purification protocols address this prob- lem and provide a method to transform a certain num- ber of copies of a noisy state into single copy with high- fidelity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' The first protocols to purify Bell states were in- troduced by Bennett et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' and Deutsch et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' [1–3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' The concept was then further developed for different entan- gled states, especially in the multiparticle setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' This includes protocols for the purification of different kinds of states, such as graph states [4, 5], or W states [6], see also [7] for an overview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' When analysing multiparticle entanglement, the expo- nentially increasing dimension of the Hilbert space ren- ders the discussion of arbitrary states difficult.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' It is there- fore a natural strategy to consider specific families of states with enable a simple description.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Graph states [8] and hypergraph states [9–11] form such families of multi-qubit quantum states, as they can be described by a graphical formalism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Besides this, they found appli- cations in various contexts, ranging from quantum error correction [12, 13], measurement-based quantum compu- tation [14, 15], and Bell nonlocality [16–18] and state ver- ification and self-testing [19, 20].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Note that hypergraph states are a special case of the so-called locally maximally entangleable states [9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Concerning entanglement purification, the only known purification protocol which is valid for hypergraph states is formulated for LME states by Carle, Kraus, Dür, and de Vicente (CKDdV) [21].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' In this paper we first ask how this protocol can be translated to the hypergraph for- malism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Based on this, we can then systematical develop improvements of the protocol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Our paper is organizes as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' In Section II we in- troduce our notation and review hypergraph states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' We also recall how operations like cnot and Pauli operators act graphically.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
In Section III we reformulate the CKDdV purification protocol in a graphical manner, providing a different language to understand it.

Figure 1. Examples of graphs and hypergraphs. Figure (a) shows a fully connected graph, which corresponds to the three-qubit GHZ state. In the hypergraph state formalism one often draws edges as circles (right) instead of lines as in the graph state formalism (left). The hypergraph state corresponding to the hypergraph in figure (b) is local unitary equivalent to the state |H⟩ = (|000⟩ + |001⟩ + |010⟩ + |111⟩)/2.
Based on this, we propose systematic extensions in Section IV, which naturally arise from the graphical formalism. We first propose two approaches to make the protocol applicable to noisy states where the original CKDdV protocol fails. We then propose a method requiring fewer copies of noisy states to reach a certain target state. In Section V we extend the protocol to more qubits. We summarize and conclude in Section VI.
II. HYPERGRAPH STATES

In this section we present a short introduction to the class of hypergraph states and the description of transformations between them. Readers familiar with the topic may directly skip to the next section.

arXiv:2301.11341v1 [quant-ph] 26 Jan 2023

A. Definition of Hypergraph States

A hypergraph H = (V, E) is a set V of vertices together with a set E of hyperedges e connecting them.
Contrary to a normal graph, the edges in a hypergraph may connect more than two vertices; examples of hypergraphs are given in Figure 1. Hypergraph states are multi-qubit quantum states, where the vertices and hyperedges of the hypergraph H = (V, E) represent qubits and entangling gates, respectively.
The state |H⟩ corresponding to a hypergraph H = (V, E) is defined as

|H⟩ = ∏_{e∈E} C_e |+⟩^{⊗|V|} ≡ U_ph |+⟩^{⊗|V|},    (1)

where C_e is a generalized CZ gate, acting on the qubits in the edge e as C_e = 1_e − 2|11…1⟩⟨11…1|_e. If an edge contains only a single vertex, |e| = 1, then C_e reduces to the Pauli-Z operator, and for two-vertex edges C_e is just the standard two-qubit controlled phase gate.
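As a sanity check of Eq. (1), the following sketch (our own illustration, not code from the paper; qubit q is mapped to bit q of the basis-state index) builds the state vector of a hypergraph state by applying generalized CZ gates to |+⟩^⊗|V|. For a single three-qubit edge it reproduces the uniform superposition with a sign flip on |111⟩.

```python
import numpy as np

def hypergraph_state(n, edges):
    """|H> = prod_{e in E} C_e |+>^{(x)n}; each C_e flips the sign of
    those basis states in which every qubit of e is 1 (Eq. (1))."""
    psi = np.full(2 ** n, 2 ** (-n / 2))  # amplitudes of |+>^{(x)n}
    for e in edges:
        for idx in range(2 ** n):
            if all((idx >> q) & 1 for q in e):
                psi[idx] = -psi[idx]
    return psi

psi = hypergraph_state(3, [{0, 1, 2}])
# only the |111> amplitude (index 7) picks up a minus sign
```

A single-vertex edge {v} reduces to a Pauli-Z on qubit v, as stated above.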
A detailed discussion of hypergraph state properties can be found in Refs. [22, 23].
Similarly as for graph states, there is an alternative definition using so-called stabilizing operators. First, one can define for each vertex i a stabilizer operator

S_i = U_ph X_i U_ph†,    (2)

where X_i denotes the first Pauli matrix acting on the i-th qubit and U_ph denotes the collection of phase gates as in Eq. (1). Note that here only the gates C_e with i ∈ e matter. The stabilizing operators are non-local hermitian observables with eigenvalues ±1; they commute and generate an abelian group, the so-called stabilizer.
Then, a hypergraph state may be defined as a common eigenvector of all stabilizing operators S_i. Here, one has to fix the eigenvalues of the S_i. Often, the state defined in Equation (1) is called |H_{00…0}⟩, as it is a common eigenstate of the S_i with eigenvalue +1. By applying Pauli-Z gates to the state, one obtains states orthogonal to |H_{00…0}⟩, where some of the eigenvalues are flipped to −1. By applying all possible combinations of Z gates, one obtains a basis {|H_k⟩ = Z_k |H_0⟩}, where k is a binary multi-index and Z_k = ∏_{v∈V} Z_v^{k_v}. In this notation, it holds that S_i |H_k⟩ = (−1)^{k_i} |H_k⟩; hence |H_k⟩ is an eigenstate of S_i with eigenvalue (−1)^{k_i}. It is convenient to write arbitrary states in the hypergraph basis:

ρ = ∑_{k,k′} c_{k,k′} |H_k⟩⟨H_{k′}|.    (3)

Later we will purify states in this form to the state |H_0⟩.
Figure 2. Example of a cnot_{1,4} gate (with control qubit 1 and target qubit 4) performed on a hypergraph state. Left: Hypergraph with vertex set V = {1, …, 6} and edge set E = {{1}, {1, 2, 3}, {3}, {4}, {4, 5, 6}}. Right: Hypergraph after applying cnot_{1,4}. A new edge {1, 5, 6} appeared while the edge {1} vanished.
The effect of applying the cnot_{1,4} gate is to introduce or delete edges from the set E_4 = {{1}, {1, 5, 6}}. The underlying rule is the following [24]: one takes the so-called adjacency A(4) of the target qubit t = 4, where one first considers all edges that contain t and then removes t from them. Here, we have A(4) = {{}, {5, 6}}. Then, E_4 contains all edges which are unions of edges from A(4) with the edge {1} of the control qubit c = 1.
B. Operations on Hypergraph States

Many operations on hypergraph states can be represented in a graphical manner. In the following we explain the effect of applying the Pauli gates X and Z and of measuring in the corresponding bases σ_x and σ_z, discuss how to represent the cnot gate graphically [24], and introduce the reduction operator P_{v1,v2}, which we will need later. Note that in the following, for Pauli matrices, we use X and Z to denote the corresponding unitary transformations and σ_x and σ_z to denote the measurements. We only discuss transformations that are needed in the current paper; an overview of other transformations can be found in Ref. [23].
We have already mentioned the action of the unitary transformation Z_v on some qubit v. It adds the edge e = {v} to the set of edges E if it was not contained before, or removes it otherwise. For example, applying Z_2 and Z_3 to the left hypergraph state in Figure 2 would add a circle at vertex 2 and remove the one at vertex 3.
The unitary transformation X_v on a vertex v of a hypergraph state |H⟩ corresponding to the hypergraph H = (V, E) is given by

X_v |H⟩ = ∏_{e∈E} C_e ∏_{e′∈A(v)} C_{e′} |+⟩^{⊗|V|},    (4)

where A(v) is the adjacency of vertex v. This is a set of edges defined as

A(v) = {e \ {v} | e ∈ E with v ∈ e}.    (5)

In words, to build the adjacency A(v), one first takes the set of edges that contain v and then removes v from them. Examples of local transformations X are given in Figure 3.
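The edge-set form of Eqs. (4) and (5) is straightforward to implement. The following sketch (our own helper names, representing each hypergraph purely by its edge set) toggles the adjacency edges and reproduces the two steps of Figure 3.

```python
def adjacency(edges, v):
    """A(v): all edges containing v, with v removed (Eq. (5))."""
    return {frozenset(e - {v}) for e in edges if v in e}

def apply_x(edges, v):
    """Graphical action of X_v (Eq. (4)): toggle the edges in A(v)."""
    return set(edges) ^ adjacency(edges, v)

triangle = {frozenset({1, 2, 3})}
after_x3 = apply_x(triangle, 3)   # adds the new edge {1, 2}
after_x2 = apply_x(after_x3, 2)   # adds the edges {1} and {1, 3}
```

The symmetric difference (`^`) already encodes that applying the same C_e twice cancels it.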
Let us now discuss the graphical description of some local measurements on hypergraph states.

Figure 3. Application of X operators on qubits 3 and 2. We first apply X_3 to the left graph. The adjacency of qubit 3 is given by A(3) = {{1, 2}}. This new edge is shown by the blue dashed line in the middle graph. We then apply X_2 to the middle graph. The adjacency of qubit 2 is given by A(2) = {{1}, {1, 3}}. These new edges are shown by the dotted purple lines in the right graph.

In order to derive the post-measurement state after measuring vertex v, we can expand the state |H⟩ at this vertex as

|H⟩ = (1/√2) (|0⟩_v |H_0⟩ ± |1⟩_v |H_1⟩),    (6)

where |H_0⟩ and |H_1⟩ are new hypergraph states with vertex sets V_0 = V_1 = V \ {v} and edge sets E_0 = {e ∈ E | v ∉ e} and E_1 = E_0 ∪ A(v) [23]. After measuring σ_z, we therefore either obtain the state |H_0⟩ or |H_1⟩. Measuring σ_x leads to a superposition of these two states, and the post-measurement state is then often not a hypergraph state anymore. In our case, we only measure σ_x on qubits which are separated from the other parts of the system, that is, where |H_0⟩ = |H_1⟩.
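The two σ_z branches of Eq. (6) can be read off directly from the edge set. A minimal sketch (our own function name, following the paper's rule E_1 = E_0 ∪ A(v)):

```python
def z_measure_branches(edges, v):
    """Edge sets of the two sigma_z branches of Eq. (6):
    E0 keeps the edges avoiding v, E1 additionally contains A(v)."""
    e0 = {frozenset(e) for e in edges if v not in e}
    a_v = {frozenset(e - {v}) for e in edges if v in e}  # adjacency A(v)
    return e0, e0 | a_v

# measuring vertex 3 of the single three-qubit edge {1, 2, 3}:
e0, e1 = z_measure_branches({frozenset({1, 2, 3})}, 3)
# outcome 0 leaves |++>, outcome 1 the graph state with edge {1, 2}
```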
Applying a cnot_{c,t} gate to a hypergraph state |H⟩, where c is the control and t the target, introduces or deletes the hyperedges of the set E_t = {e_t ∪ {c} | e_t ∈ A(t)}. The new edge set after applying cnot_{c,t} is given by

E′ = E △ E_t,    (7)

where A △ B = (A ∪ B) \ (A ∩ B) is the symmetric difference of two sets. Since C_e² = 1, double edges cancel out. Therefore, the operation cnot_{c,t} deletes edges which are in both E and E_t and introduces edges which are only in E_t. For example, in the left part of Figure 2, the adjacency of vertex 4 is given by A(4) = {{}, {5, 6}} and therefore E_4 = {{1}, {1, 5, 6}}.
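The cnot rule of Eq. (7) in the same edge-set picture (a sketch under the conventions above; `adjacency` is restated so the snippet stands alone); it reproduces the Figure 2 example.

```python
def adjacency(edges, v):
    """A(v): edges containing v, with v removed."""
    return {frozenset(e - {v}) for e in edges if v in e}

def apply_cnot(edges, c, t):
    """Graphical cnot_{c,t}: E' = E triangle {e ∪ {c} | e ∈ A(t)} (Eq. (7))."""
    e_t = {frozenset(e | {c}) for e in adjacency(edges, t)}
    return set(edges) ^ e_t

# Figure 2: E = {{1}, {1,2,3}, {3}, {4}, {4,5,6}}, then apply cnot_{1,4}
E = {frozenset(e) for e in [{1}, {1, 2, 3}, {3}, {4}, {4, 5, 6}]}
E_new = apply_cnot(E, 1, 4)
# the edge {1} vanishes and the new edge {1, 5, 6} appears
```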
Finally, another operator which will be important later is the reduction operator P_{v1,v2}, which maps two qubits to a single qubit. In the computational basis, the reduction operator is written as

P_{v1,v2} = |0⟩⟨00| + |1⟩⟨11|.    (8)

It merges the two vertices v1 and v2 into one, which we call v2. This action changes edges which contain v1 into edges which contain v2 and deletes pairs of edges e, e′ with e ≠ e′ but (e \ {v1}) = (e′ \ {v2}). The new edge set is therefore E′ = {e ∈ E | v1 ∉ e} △ {f ∪ {v2} | f ∈ A(v1)}. An example is shown in Figure 4.
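The reduction rule can be sketched in the same edge-set picture (our own function name; duplicate edges cancel via the symmetric difference, mirroring C_e² = 1):

```python
def reduce_pair(edges, v1, v2):
    """Graphical action of P_{v1,v2}: merge v1 into the surviving vertex v2;
    E' = {e in E | v1 not in e} triangle {f ∪ {v2} | f in A(v1)}."""
    keep = {frozenset(e) for e in edges if v1 not in e}
    moved = {frozenset((e - {v1}) | {v2}) for e in edges if v1 in e}
    return keep ^ moved

# merging vertex 3 into vertex 2 turns {1, 3} into a second copy of {1, 2},
# so the two copies cancel and the edge set becomes empty
E = {frozenset({1, 2}), frozenset({1, 3})}
```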
III. THE CKDDV PURIFICATION PROTOCOL

In this section we discuss the only known purification protocol which works for hypergraph states [21]; we will refer to it as the CKDdV protocol. Originally, it was formulated for more general LME states. We first reformulate the purification protocol in a graphical manner, which makes it intuitively understandable. Based on this reformulation, we can then propose improvements.

Figure 4. Application of the reduction projectors P_{3,6} and P_{2,5}. The projector merges two vertices and their corresponding edges into one. In the first step, we merge vertices 3 and 6. In the second step, we merge vertices 2 and 5. This results in two copies of the same edge: the green dashed edge {1, 5, 6} and the edge which was initially {1, 2, 3}; such double edges cancel out.
In the simplest case, the aim is to purify a three-qubit state ρ to a pure hypergraph state, chosen to be the state |H_0⟩ = C_{123} |+⟩^⊗3. The state is distributed between three parties, Alice, Bob, and Charlie. In the following, we explicitly describe the sub-protocol which reduces the noise on Alice's qubit; there are equivalent sub-protocols for Bob's and Charlie's qubits. The protocol is performed on two copies of a state ρ. Alice holds qubit a1 of the first copy and qubit a2 of the second copy, and equivalently for Bob and Charlie.
The key idea of the protocol is to induce a transformation of the basis elements of the form

|H_{i,j,k}⟩ |H_{i′,j′,k′}⟩ → δ_{i,i′} |H_{i,j+j′,k+k′}⟩,    (9)

where δ_{i,i′} denotes the Kronecker delta. This means that the sub-protocol compares the indices i, i′ on Alice's qubits, and the state is discarded when i ≠ i′. This map drives a general state as in Eq. (3) closer to the desired hypergraph state. In detail, the sub-protocol which implements this transformation is given by:

Protocol 1 (CKDdV protocol).
(0) Alice, Bob, and Charlie share two copies of a state.
(i) Alice applies a local cnot_{a1,a2} gate on her qubits.
(ii) Bob and Charlie apply local reduction operators P_{v1,v2} on their qubits.
(iii) Alice measures qubit a1 in the σ_x basis. She keeps the state if the outcome is "+1", and discards it otherwise.
In Figure 5 it is shown how the basis elements |H_{000}⟩ |H_{i00}⟩ transform.

Figure 5. The CKDdV protocol, as described in Protocol 1. In the figure, the transformation of the two basis elements |H_{000}⟩ |H_{100}⟩ is shown. In step (i), Alice performs a local cnot_{1,4} gate. Then, Bob and Charlie apply the local reduction operators P_{2,5} and P_{3,6}, respectively. Double edges cancel out, so that the green dashed line and the former edge {1, 2, 3} vanish. In step (iii), Alice measures qubit 1 in the σ_x basis. If there is a single-qubit edge on vertex 1, such as the orange one in this figure, her measurement outcome will be "−1" and the state is discarded. If one ignores all orange single-qubit edges in the figure, this corresponds to the transformation of the basis elements |H_{000}⟩ |H_{000}⟩. In this case, Alice's measurement outcome will be "+1" and the remaining state |H_{000}⟩ is kept.
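Putting the graphical rules together, the whole sub-protocol of Protocol 1 can be traced on edge sets alone. The sketch below (our own illustration, with qubits 1–3 for the first copy and 4–6 for the second, as in Figure 5) reproduces both cases of the caption: the input |H_{000}⟩|H_{100}⟩ leaves the tell-tale single-qubit edge {1} and is discarded, while |H_{000}⟩|H_{000}⟩ is kept and leaves |H_{000}⟩ on qubits 4–6.

```python
def adjacency(edges, v):
    return {frozenset(e - {v}) for e in edges if v in e}

def apply_cnot(edges, c, t):
    """Eq. (7): toggle the edges {e ∪ {c} | e in A(t)}."""
    return set(edges) ^ {frozenset(e | {c}) for e in adjacency(edges, t)}

def reduce_pair(edges, v1, v2):
    """P_{v1,v2}: merge v1 into v2; double edges cancel."""
    keep = {frozenset(e) for e in edges if v1 not in e}
    moved = {frozenset((e - {v1}) | {v2}) for e in edges if v1 in e}
    return keep ^ moved

def sub_protocol(edges):
    """Steps (i)-(iii) of Protocol 1 on two 3-qubit copies (qubits 1-3, 4-6).
    Returns (kept?, remaining edge set of the second copy)."""
    edges = apply_cnot(edges, 1, 4)        # (i) Alice's cnot_{a1,a2}
    edges = reduce_pair(edges, 2, 5)       # (ii) Bob's reduction
    edges = reduce_pair(edges, 3, 6)       # (ii) Charlie's reduction
    kept = frozenset({1}) not in edges     # (iii) sigma_x outcome on qubit 1
    return kept, {e for e in edges if 1 not in e}

H000_H000 = {frozenset({1, 2, 3}), frozenset({4, 5, 6})}
H000_H100 = H000_H000 | {frozenset({4})}   # extra Z edge on qubit 4: i' = 1
```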
In order to purify the full state, one needs to choose a sequence in which these sub-protocols are applied to the different parties. In Ref. [21], the sequence ABC-CAB-BCA was favoured, as it seems to perform better than just repeating the sequence ABC. The reason is that Charlie's qubit becomes more noisy due to the back-action from the sub-protocols purifying Alice's and Bob's qubits.
IV. IMPROVING THE PROTOCOL PERFORMANCE

In order to purify towards one state of a certain fidelity, one needs a number of input states which depends exponentially on the number of iterations, as in each run of the protocol a certain fraction of states is discarded. Therefore, it is of high interest to apply the sub-protocols in a sequence which works as efficiently as possible. As already pointed out by Carle et al. [21], which sequence is the most advantageous depends on the input state, and it is not trivial to see which sequence is optimal. Carle et al. decided to use the sequence S = ABC-CAB-BCA in all their applications, since it performs well in many cases. In the following we will ask whether the proposed sequence really is the best and how we can potentially find better sequences.

One should also note that in step (ii) of the protocol a large fraction of states is discarded. The operator P_{v1,v2} corresponds to a positive map which maps two qubits that are in the same state to one qubit; both qubits are discarded if they are in different states. This can be seen as one outcome of a measurement. So, in the second part of this section we will ask whether one can reduce the number of discarded states.
A. Improved and Adaptive Sequences

Consider a noisy three-qubit state ρ(p), where p is a noise parameter for some noise model, which should be purified to the pure hypergraph state |H000⟩⟨H000|.
Clearly, for a fixed sequence S there is a maximal amount of noise up to which the state can still be purified, and beyond that a regime where it cannot be purified any more. Interestingly, for some parameter regimes where the state cannot be purified, the purification protocol does not converge towards a state with random noise, but towards a specific state which is a mixture of two states: either 1/2 (|H000⟩⟨H000| + |H001⟩⟨H001|), 1/2 (|H000⟩⟨H000| + |H010⟩⟨H010|), or 1/2 (|H000⟩⟨H000| + |H100⟩⟨H100|). This observation gives insights into how well the purification works on the different parties: the protocol eliminates noise on two parties but fails on the third. For example, if we apply the sequence S = ABC, in the cases we tested there is a regime where the state does not get purified but converges to 1/2 (|H000⟩⟨H000| + |H001⟩⟨H001|). This is consistent with the explanation given in Ref. [21] that the purification has a disadvantage on Charlie's site.
This may be explained as follows: by performing the protocol at one party, one aims to reduce noise on this party. As an unwanted side effect, one increases noise on the other parties. This happens because if there is noise on the first input state, the local reduction operator will “copy” it to the second state (see Equation (9)). So, when choosing the sequence S = ABC, one increases the noise on Charlie's qubit two times before purifying it for the first time.
How well the protocol performs on each party can be analysed using the measurement statistics obtained in step (iii) of the protocol. The probability to measure outcome “+1” in step (iii) on a qubit belonging to a certain party indicates how much noise the state carries on this party. On the perfect target state, one does not detect any noise and therefore measures outcome “+1” with probability equal to one. If one applies the protocol to the state 1/2 (|H000⟩⟨H000| + |H001⟩⟨H001|), however, one obtains outcome “+1” with a probability equal to one or 0.5, depending on which subprotocol was applied: if Alice's or Bob's qubits are measured in step (iii), the probability is equal to one; if Charlie's qubit is measured, the probability is 0.5. So, by evaluating the probabilities to measure outcome “+1” in step (iii) of the protocol, one can adapt the protocol to the given state.
All in all, we use two approaches to find better sequences. The first approach is to find an optimal sequence which allows a high noise tolerance and will be applied later without further observation of the statistics. The second approach uses two sequences, where we switch from one to the other depending on the measurement outcomes during the process.

      Ewn(ρ, p)            Edeph(ρ, p)          Edepo(ρ, p)
S1    ABC-CBA-ABC          ABC-CBA-CBA          ABC-CAB-BCA
S2    BAB-CAB-ABA          CCC-ACB-CBC          BBB-BCB-BBB-BAB
⃗a     (0.33, 0.35, 0.32)   (0.35, 0.43, 0.21)   (0.35, 0.34, 0.31)
b     0.35                 0.39                 0.44

Table I. Sequences S1, S2, approximate weight vectors ⃗a, and bounds b for states with three kinds of noise. For an explanation see the text.
The first approach helps to find sequences which are more efficient also for the purification of states with a low noise level. The second approach gives a method to purify states which would not be purifiable otherwise.
To find an advantageous sequence in the first approach, we consider input states which are slightly too noisy to be purified with the standard sequence from [21]. We need sufficiently many states so that we can estimate the probability to measure “±1” in step (iii) of the protocol. If the purification works, the probability to measure “−1” tends to zero; otherwise it tends to 0.5. Knowing this probability at each step of the protocol, and therefore on which party the purification fails, we can update our sequence such that the new sequence gives an advantage to the party which failed before. This process can be repeated until we do not find a better sequence of a certain length. We restricted ourselves to sequences of length nine. The best sequence found in this way we call S1.
With the second approach, we give a way to purify states which cannot be purified by sequence S1 because their initial fidelity is slightly beyond the threshold. We start using sequence S1 and switch to sequence S2 depending on the measurement outcomes of step (iii). Our switching condition is the following: after each measurement of step (iii), we evaluate the probability to measure “−1” for the given party. Based on the last three probabilities associated to the same party, we decide whether to switch or not. For ⃗x being the vector of these three probabilities, where x3 is the newest one, we switch if the scalar product ⃗a · ⃗x exceeds a bound b, where ⃗a is a weight vector.
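As a minimal sketch of this switching rule (the function and variable names are ours, not from the paper), the decision can be written as:

```python
# Sketch of the adaptive switching rule described above. The names
# `should_switch`, `x`, `a`, `b` are our own illustration, not from the paper.

def should_switch(x, a, b):
    """Switch from S1 to S2 if the weighted sum a . x exceeds the bound b.

    x: last three estimated probabilities of outcome "-1" for one party
       (x[2] is the newest); a: weight vector; b: bound (cf. Table I).
    """
    assert len(x) == len(a) == 3
    return sum(ai * xi for ai, xi in zip(a, x)) > b

# Example with the white-noise parameters of Table I:
a_wn, b_wn = (0.33, 0.35, 0.32), 0.35
print(should_switch((0.45, 0.48, 0.50), a_wn, b_wn))  # noise persists -> True
print(should_switch((0.10, 0.05, 0.02), a_wn, b_wn))  # noise vanishes -> False
```

For the dephasing and depolarizing channels, only the parameters ⃗a and b from Table I change.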
To see the efficiency of our methods, we consider different noise models. We analyze the influence of global white noise, described by the channel

Ewn(ρ, p) = p ρ + (1 − p)/2^n · 1,    (10)

where n is the number of qubits; in this section, n = 3.
We further analyse local noise channels given by E(ρ, p) = ∏_{i=1}^{n} E_i(ρ, p), where E_i is either the dephasing channel

E_deph^i(ρ, p) = p ρ + (1 − p)/2 · (ρ + Z_i ρ Z_i)    (11)

or the depolarizing channel

E_depo^i(ρ, p) = p ρ + (1 − p)/4 · (ρ + X_i ρ X_i + Y_i ρ Y_i + Z_i ρ Z_i).    (12)

               pmin from [21]   pmin from S1   pmin from adaptive protocol
Ewn(ρ, p)      0.6007           0.5878         0.5876
Edeph(ρ, p)    0.8013           0.7803         0.7747
Edepo(ρ, p)    0.8136           0.8136         0.8132

Table II. Noise thresholds pmin reproduced from Ref. [21], obtained from our sequence S1 (see Table I), and obtained with the adaptive approach. In the case of Edepo(ρ, p) we found that the sequence from Ref. [21] was already the best sequence of length 9; therefore there is no improvement of pmin in this case.
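To make Equations (10)–(12) concrete, here is a single-qubit sketch using plain 2×2 matrices (the paper applies the local channels to each qubit of the three-qubit state; the helper names are ours):

```python
# Single-qubit illustration of the channels in Eqs. (10)-(12),
# with density matrices as plain 2x2 nested lists.

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * a for a in row] for row in A]

def conj_by(U, rho):
    # Computes U rho U (the Paulis are Hermitian, so U^dagger = U).
    n = len(U)
    tmp = [[sum(U[i][k] * rho[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    return [[sum(tmp[i][k] * U[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def white_noise(rho, p, n=1):
    # Eq. (10): p*rho + (1-p)/2^n * identity
    return mat_add(mat_scale(p, rho), mat_scale((1 - p) / 2**n, I))

def dephasing(rho, p):
    # Eq. (11): p*rho + (1-p)/2 * (rho + Z rho Z)
    return mat_add(mat_scale(p, rho), mat_scale((1 - p) / 2, mat_add(rho, conj_by(Z, rho))))

def depolarizing(rho, p):
    # Eq. (12): p*rho + (1-p)/4 * (rho + X rho X + Y rho Y + Z rho Z)
    mix = mat_add(mat_add(rho, conj_by(X, rho)), mat_add(conj_by(Y, rho), conj_by(Z, rho)))
    return mat_add(mat_scale(p, rho), mat_scale((1 - p) / 4, mix))

plus = [[0.5, 0.5], [0.5, 0.5]]   # |+><+|
print(dephasing(plus, 0.0))       # -> [[0.5, 0.0], [0.0, 0.5]], off-diagonals killed
```

At p = 0 both local channels map |+⟩⟨+| to the maximally mixed state, while p = 1 leaves any state untouched.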
The sequences, weight vectors, and bounds we found to be optimal are given in Table I. To compare the approaches, we list in Table II the noise thresholds found in Ref. [21], obtained by our sequence S1, and obtained by the adaptive approach. The sequences we found are also better in other respects: if we apply the new sequence S1 for nine rounds on given input states, the output states have a higher fidelity than after purifying the same states for nine rounds using the sequence given in Ref. [21].
B. Recycling of Discarded States

If one wishes to purify a state using the CKDdV protocol, one needs a high number of input states in order to obtain one state of a certain fidelity.
Let us count how many states we need to obtain one state after applying the protocol once. In step (0) of the protocol, one takes two input states. One does not lose states by applying cnot in step (i). By applying the reduction operator Pv1,v2, approximately 1/2 of the pairs are lost. Since this operator is applied on two parties in step (ii), one needs approximately four pairs. In step (iii), one measures outcome “+1” with a probability ⩽ 1; this probability depends on the fidelity of the states and increases with increasing fidelity. So, in total, approximately 8 = 2^3 input states are required to obtain one output state. To prepare a state for which we need to apply the protocol m times, we need more than 8^m input states.
To purify, for example, a state of initial fidelity 0.93 to a state of fidelity 0.994, we need three steps. The required number of input states to obtain one output state is roughly 8.7^3 ≈ 660. If we want to purify the same state to a fidelity of 0.999, which we reach after six steps, we need about 8.38^6 ≈ 346 000 input states to get one new state.
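The count above can be reproduced with a one-line estimate (a sketch; the per-round factors 8.7 and 8.38 are the effective values implied by the quoted numbers, slightly above 8 because the step-(iii) success probability is below one):

```python
# Rough resource estimate for m purification rounds: each round consumes
# roughly 8 = 2^3 input states per output state, a bit more once the
# step-(iii) success probability < 1 is included.

def inputs_needed(rounds, factor_per_round=8.0):
    """Approximate number of input states per final output state."""
    return factor_per_round ** rounds

print(round(inputs_needed(3, 8.7)))    # ~660 (fidelity 0.93 -> 0.994)
print(round(inputs_needed(6, 8.38)))   # ~346 000 (fidelity 0.93 -> 0.999)
```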
It is natural to try to use the available quantum states more efficiently. In step (ii) of the CKDdV protocol, one performs a projective measurement and considers only one outcome, namely Pv1,v2, which occurs with probability approximately 1/2. We suggest using the states which were discarded because something different from Pv1,v2 was measured. The second reduction operator P⊥v1,v2 is orthogonal to Pv1,v2 and defined as

P⊥v1,v2 = |0⟩⟨10| + |1⟩⟨01| = Pv1,v2 (Xv1 ⊗ 1v2).    (13)

Like Pv1,v2, the operator P⊥v1,v2 is a positive map; it maps two qubits which are in different states to one qubit. This can be seen as a different measurement outcome than Pv1,v2, or one may interpret the set {Pv1,v2, P⊥v1,v2} as a quantum instrument. In the original CKDdV protocol one keeps the state only after measuring Pb1,b2 Pc1,c2. There are three more possible measurement outcomes: Pb1,b2 P⊥c1,c2, P⊥b1,b2 Pc1,c2, and P⊥b1,b2 P⊥c1,c2. In the cases where P⊥v1,v2 is measured on at least one party, one obtains a post-measurement state to which one can apply some corrections to get a state similar to the input state. One can collect these states and purify them further.
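The identity in Equation (13) can be checked with explicit matrices in the basis {|00⟩, |01⟩, |10⟩, |11⟩} (a sketch; we take Pv1,v2 = |0⟩⟨00| + |1⟩⟨11|, the map described in the text that sends two equal qubits to one):

```python
# Check of Eq. (13): P_perp = P (X ⊗ 1), with 2x4 matrices in the
# two-qubit computational basis {|00>, |01>, |10>, |11>}.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

P = [[1, 0, 0, 0],        # |0><00|
     [0, 0, 0, 1]]        # |1><11|
P_perp = [[0, 0, 1, 0],   # |0><10|
          [0, 1, 0, 0]]   # |1><01|
X = [[0, 1], [1, 0]]
I = [[1, 0], [0, 1]]

print(matmul(P, kron(X, I)) == P_perp)  # True: flipping qubit v1 first turns P into P_perp
```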
So, one can write down a modified version of the CKDdV protocol. Here, we give the sub-protocol which reduces noise on Alice's qubits; the sub-protocols for Bob and Charlie work analogously.

Protocol 2 (Improved CKDdV protocol).

(0) Alice, Bob, and Charlie share two copies of a state.

(i) Alice applies a local cnot(a1,a2) gate on her qubits.

(ii) Bob and Charlie perform a measurement on their qubits, measuring the local reduction operators Pv1,v2 and P⊥v1,v2. If the measurement outcome for both Bob and Charlie was Pv1,v2, continue with step (iiia); else, continue with step (iiib).

(iiia) After Bob and Charlie both measured Pv1,v2, Alice measures qubit a1 in the σx basis. She keeps the state if the outcome is “+1”, and discards it otherwise.

(iiib) After measuring P⊥v1,v2 on at least one pair of Bob's and Charlie's qubits, Alice measures her qubit a1 in the σz basis. If she measures “+1”, she keeps the state as it is. Otherwise, Bob and Charlie apply local unitaries which depend on the combination of measurement outcomes in step (ii) and are given in Table III.
The key idea is that output states from step (iiib) can be collected and purified further. In the case of measuring P⊥v1,v2 on at least one party, the protocol gives us a transition

|H_{i,j,k}⟩ |H_{i′,j′,k′}⟩ → |H_{i′, j+j′, k+k′}⟩.    (14)

The resulting state has in general a lower fidelity than the input state. This is caused by the same “copying” of noise as discussed before: since in the considered case the protocol does not reduce noise, the fidelity drops.
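On the state labels, the transition of Equation (14) amounts to simple bookkeeping (a sketch; we assume the labels i, j, k are bits and the sums j + j′ and k + k′ are taken modulo 2, as for hypergraph-state decorations):

```python
# Label bookkeeping for Eq. (14): |H_{i,j,k}> |H_{i',j',k'}> -> |H_{i', j+j', k+k'}>.
# Assumption (ours): the labels are bits and the sums are taken mod 2.

def recycled_label(first, second):
    i, j, k = first
    ip, jp, kp = second
    return (ip, (j + jp) % 2, (k + kp) % 2)

print(recycled_label((0, 0, 0), (0, 0, 1)))  # (0, 0, 1): the decoration survives
```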
Measurement outcomes    local correction Bob    local correction Charlie
Pb1,b2 P⊥c1,c2          Z                       1
P⊥b1,b2 Pc1,c2          1                       Z
P⊥b1,b2 P⊥c1,c2         Z                       Z

Table III. In Protocol 2, step (iiib), Alice measures her qubit a1 in the Z basis. If her outcome is “−1”, Bob and Charlie have to apply local corrections to their qubits. The local corrections depend on their measurement outcomes from step (ii) and are given in this table.
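The correction rule of Table III can be expressed as a small lookup (a sketch with our own names; a boolean flag marks whether the corresponding party measured P⊥ in step (ii)):

```python
# Corrections from Table III, applied only in the branch where Alice's
# sigma_z outcome in step (iiib) is "-1". (bob_perp, charlie_perp) record
# whether Bob/Charlie measured the orthogonal operator P_perp in step (ii).

TABLE_III = {
    (False, True):  ("Z", "1"),   # P_{b1,b2} P_perp_{c1,c2}
    (True,  False): ("1", "Z"),   # P_perp_{b1,b2} P_{c1,c2}
    (True,  True):  ("Z", "Z"),   # P_perp_{b1,b2} P_perp_{c1,c2}
}

def corrections(bob_perp, charlie_perp, alice_outcome):
    """Return (Bob's, Charlie's) local correction, or None in the (iiia) branch."""
    if not (bob_perp or charlie_perp):
        return None               # both measured P: step (iiia) applies instead
    if alice_outcome == +1:
        return ("1", "1")         # Alice measured "+1": keep the state as it is
    return TABLE_III[(bob_perp, charlie_perp)]

print(corrections(False, True, -1))  # ('Z', '1'): Bob applies Z, Charlie nothing
```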
The first case is shown in Figure 6.

Figure 6. Modified Protocol 2 for the same initial states as shown in Figure 5, for the case of measuring Pb1,b2 P⊥c1,c2 in step (ii). Alice performs a σz-measurement on her qubit 1 of the state in the second row. If she gets outcome “+1” in step (iiib), the resulting state is the same as the initial state (qubits 4, 5 and 6). If she gets outcome “−1”, Bob's qubit 5 has a decoration, which he needs to correct. After Bob applies a local Z5 unitary on qubit 5, the resulting state is again the same as the initial state (qubits 4, 5 and 6). Note that this is only the case if there is no noise on qubits 2 and 3, as shown in the figure; in general one obtains the state given in Equation (14).
An example for Protocol 2 is shown in Figure 6, where we assume that Bob measures P2,5 and Charlie measures P⊥3,6. In this case, the local correction after measuring outcome “−1” is the application of a unitary Z5 on qubit 5.
Given a certain number of input states which we want to purify to a target fidelity, we obtain more output states of the desired fidelity if we follow Protocol 2 instead of the original CKDdV protocol.

Figure 7. Effect of using Protocol 2 instead of the original CKDdV protocol. The input states are given by Ewn(|H0⟩⟨H0|, p). We first apply Protocol 1 three times and compute the fidelity F3 of the output states. Then, we apply Protocol 2 on the same input states and compare how many more output states of fidelity ⩾ F3 we get. The figure displays the increase of output states by using Protocol 2, depending on the fidelity F0 of the input states. (Axes: initial fidelity F0 from 0.9800 to 1.0000; increase of the number of output states from 3.00 to 5.00.)
The effect in the cases we tested turned out, however, to be small. As input states, we chose the state |H000⟩⟨H000| mixed with white noise. We first applied Protocol 1 three times, that is, once on each party, and computed the fidelity F3 of the output states. Then, we applied Protocol 2 on the same input states and compared how many more output states of fidelity ⩾ F3 we get. In Figure 7 we show how much the number of output states increases by using Protocol 2, depending on the fidelity F0 of the input states. In the chosen cases, we get approximately 4% more output states from using Protocol 2 instead of the CKDdV protocol.
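The white-noise channel Ewn used for the input states can be illustrated with a short numpy sketch. The mixing convention Ewn(ρ, p) = p ρ + (1 − p) I/d is an assumption for illustration (the excerpt does not spell out the paper's exact parametrisation), and the single-qubit target is a toy stand-in for |H0⟩:

```python
import numpy as np

def white_noise(rho: np.ndarray, p: float) -> np.ndarray:
    """Mix a density matrix with white noise: p*rho + (1-p)*I/d.
    (This convention is an assumption; the paper's Ewn may differ.)"""
    d = rho.shape[0]
    return p * rho + (1 - p) * np.eye(d) / d

# Toy example: a pure single-qubit target state |psi><psi|.
psi = np.array([1.0, 0.0])
rho = np.outer(psi, psi.conj())

noisy = white_noise(rho, 0.9)
F0 = np.real(psi.conj() @ noisy @ psi)  # input fidelity with the target
print(F0)  # 0.9*1 + 0.1*(1/2) = 0.95
```

The input fidelity F0 thus decreases linearly in the noise weight, which is why the plots in Figure 7 are parametrised by F0.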
V. GENERALISATION TO MORE QUBITS

The methods described here can also be applied to states with more qubits and different arrangements of edges. We restrict our attention to hypergraphs which are k-regular and k-colorable. A hypergraph is k-regular if all edges e ∈ E have order k, and it is k-colorable if it is possible to color the vertices of the hypergraph using k colors such that no two vertices of the same color share a common edge. For example, the hypergraph states shown in Figures 2 and 8 are 3-colorable and 3-regular. In this section we discuss purification protocols for hypergraph states of more than 3 qubits which are 3-colorable and 3-regular. In the following, we will denote the colors by A, B, and C.
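These two conditions can be checked mechanically; the following brute-force sketch (function names are ours, not from the paper) verifies them for the linear 4-qubit hypergraph with edges {1,2,3} and {2,3,4} used later in the text:

```python
from itertools import product

def is_k_regular(edges, k):
    """Every hyperedge contains exactly k vertices."""
    return all(len(e) == k for e in edges)

def is_valid_coloring(edges, coloring):
    """No two vertices of the same color share a common edge,
    i.e. within each edge all colors are pairwise distinct."""
    return all(len({coloring[v] for v in e}) == len(e) for e in edges)

def is_k_colorable(vertices, edges, k):
    """Brute-force search over all k^|V| colorings (fine for small graphs)."""
    return any(
        is_valid_coloring(edges, dict(zip(vertices, colors)))
        for colors in product(range(k), repeat=len(vertices))
    )

# Linear 4-qubit hypergraph from the text: edges {1,2,3} and {2,3,4}.
vertices = [1, 2, 3, 4]
edges = [{1, 2, 3}, {2, 3, 4}]
print(is_k_regular(edges, 3), is_k_colorable(vertices, edges, 3))  # True True
```

A valid 3-coloring here is 1↦A, 2↦B, 3↦C, 4↦A, matching the labeling A, B, C used below.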
The protocols can be generalised by letting all parties holding qubits of color A do what was described for Alice before. In the same way, parties holding a qubit of color B or C do what was described for Bob or Charlie, respectively. For an explicit formulation of the generalized protocol, see Ref. [21].

Table IV. Noise thresholds pmin for the sequence SCKDdV proposed in Ref. [21] and new sequences S1. The index of the state gives the number of qubits. In the case of Edepo(ρ3, p) we found that the sequence from Ref. [21] was already the best sequence of length 9; therefore there is no improvement of pmin. When we found (non-trivially) different sequences of the same length, we marked them with a star (*).

                pmin from SCKDdV   pmin from S1   sequence S1
Ewn(ρ3, p)      0.6007             0.5878         ABC-CBA-ABC
Ewn(ρ4, p)      0.4633             0.4396         ABC-ACB-BCA
Ewn(ρ5, p)      0.3901             0.3486         ABC-ABC-CBA
Ewn(ρ6, p)      0.3341             0.3017         ABC-ACB-BAC*
Edeph(ρ3, p)    0.8013             0.7803         ABC-CBA-CBA
Edeph(ρ4, p)    0.8014             0.7803         ABC-CBA-CBA*
Edeph(ρ5, p)    0.8014             0.7803         ABC-CBA-CBA*
Edeph(ρ6, p)    0.8014             0.7803         ABC-CBA-CBA*
Edepo(ρ3, p)    0.8137             0.8136         ABC-CAB-BCA
Edepo(ρ4, p)    0.8306             0.8122         BAC-CBA-CAB
Edepo(ρ5, p)    0.8358             0.8128         ACB-BCA-CBA
Edepo(ρ6, p)    0.8144             0.8121         ABC-CBA-CAB
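The size of the threshold gaps in Table IV can be tabulated directly. The numeric values below are copied from the table; reading the two numeric columns as the thresholds for SCKDdV and for S1, in that order, is our interpretation of the (garbled) header, and the variable names are ours:

```python
# (noise, qubits): (pmin from SCKDdV, pmin from S1), values from Table IV.
pmin = {
    ("wn", 3): (0.6007, 0.5878), ("wn", 4): (0.4633, 0.4396),
    ("wn", 5): (0.3901, 0.3486), ("wn", 6): (0.3341, 0.3017),
    ("deph", 3): (0.8013, 0.7803), ("deph", 4): (0.8014, 0.7803),
    ("deph", 5): (0.8014, 0.7803), ("deph", 6): (0.8014, 0.7803),
    ("depo", 3): (0.8137, 0.8136), ("depo", 4): (0.8306, 0.8122),
    ("depo", 5): (0.8358, 0.8128), ("depo", 6): (0.8144, 0.8121),
}

# Gap between the two sequences for each state and noise model.
gap = {key: round(ckddv - s1, 4) for key, (ckddv, s1) in pmin.items()}

# For white noise the gap is largest and grows with the qubit number:
print([gap[("wn", n)] for n in range(3, 7)])  # [0.0129, 0.0237, 0.0415, 0.0324]
```

This makes the observation below quantitative: for dephasing the gap is a constant 0.0211, for depolarisation it stays below 0.024, while for white noise it reaches 0.04.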
We analysed linear three-colorable states with up to six qubits under the influence of global white noise, dephasing, and depolarisation. That is, the states to which we want to purify are U123U234 |+⟩⊗4, U123U234U345 |+⟩⊗5, and U123U234U345U456 |+⟩⊗6, as shown in Figure 8. We compare the noise threshold pmin for the sequence proposed in Ref. [21] with new sequences S1, found using the methods described in Section IV A. Our results are shown in Table IV. One sees that in the case of white noise, the differences in the noise threshold pmin become more significant for more qubits; therefore, especially in these cases it is more relevant to find good sequences. For the tested states with dephasing and depolarisation noise, the noise threshold is constant or varies only slightly, respectively.
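The gates Uijk above act on |+⟩⊗n as three-qubit controlled-Z (CCZ) gates, i.e. they flip the sign of the basis states in which all three qubits are 1. A minimal statevector sketch of the linear target states (qubit indices 0-based; helper names are ours):

```python
import numpy as np

def ccz_diag(n, qubits):
    """Diagonal of a multi-controlled-Z on the given qubits of an n-qubit
    register: -1 where all listed qubits are 1, +1 elsewhere."""
    diag = np.ones(2**n)
    for idx in range(2**n):
        if all((idx >> (n - 1 - q)) & 1 for q in qubits):
            diag[idx] = -1.0
    return diag

def linear_hypergraph_state(n):
    """U_{123} U_{234} ... |+>^(x n) for the linear 3-regular hypergraph."""
    state = np.full(2**n, 2 ** (-n / 2))   # |+>^(x n): uniform amplitudes
    for a in range(n - 2):                  # edges {a, a+1, a+2}
        state = ccz_diag(n, [a, a + 1, a + 2]) * state
    return state

psi4 = linear_hypergraph_state(4)  # U_{123} U_{234} |+>^(x 4)
print(np.isclose(np.linalg.norm(psi4), 1.0))  # True: diagonal gates preserve norm
```

Since the Uijk are diagonal with entries ±1, the resulting state keeps uniform amplitude magnitudes 2^(−n/2); only signs change, which is the defining feature of hypergraph states.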
VI. CONCLUSION AND OUTLOOK

In this paper we discussed protocols for entanglement purification of hypergraph states. First, we reformulated the CKDdV protocol in a graphical language. This offers a new way to understand the protocol; furthermore, it allows one to search for systematic extensions. Consequently, we introduced several improvements of the original protocol. These improvements are based on different sequences, adaptive schemes, as well as methods to recycle some of the unused states. While these modifications are conceptually interesting and can indeed improve the performance in various examples, the amount of the improvement in realistic examples seems rather modest.

Figure 8. Linear 3-colorable and 3-regular hypergraph states with 4, 5, and 6 qubits. The colors are denoted by A, B, and C (parties A1, B, C, A2 for 4 qubits; A1, B1, C, A2, B2 for 5 qubits; A1, B1, C1, A2, B2, C2 for 6 qubits). Note that two qubits which have the same color, for example qubits 1 and 4, still belong to different parties. Since we are restricted to local operations, we can only perform operations on qubits of the same party, that is, in general not on qubits of the same color.
The problem of finding efficient sequences is also relevant for purification protocols for other states and was raised, for example, in Ref. [4] in the context of two-colorable graph states. The methods developed here can be applied to this case, but also to all purification protocols which follow the concept introduced by Bennett et al. [1]. A further open question is how the effects of our methods scale with the number of qubits. Another open question is whether Protocol 2 can be further improved so that its effect becomes more significant.
VII. ACKNOWLEDGMENTS

We thank Mariami Gachechiladze, Kiara Hansenne, Jan L. Bönsel, and Fabian Zickgraf for discussions. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project numbers 447948357 and 440958198), the Sino-German Center for Research Promotion (Project M-0294), the ERC (Consolidator Grant 683107/TempoQ), the German Ministry of Education and Research (Project QuKuK, BMBF Grant No. 16KIS1618K), and the Stiftung der Deutschen Wirtschaft.
[1] C. H. Bennett, H. J. Bernstein, S. Popescu, and B. Schumacher, Phys. Rev. A 53, 2046 (1996).
[2] C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Phys. Rev. A 54, 3824 (1996).
[3] D. Deutsch, A. Ekert, R. Jozsa, C. Macchiavello, S. Popescu, and A. Sanpera, Phys. Rev. Lett. 77, 2818 (1996).
[4] H. Aschauer, W. Dür, and H.-J. Briegel, Phys. Rev. A 71, 012319 (2005).
[5] C. Kruszynska, A. Miyake, H. J. Briegel, and W. Dür, Phys. Rev. A 74, 052316 (2006).
[6] A. Miyake and H. J. Briegel, Phys. Rev. Lett. 95, 220501 (2005).
[7] W. Dür and H. J. Briegel, Reports on Progress in Physics 70, 1381 (2007).
[8] M. Hein, J. Eisert, and H. J. Briegel, Phys. Rev. A 69, 062311 (2004).
[9] C. Kruszynska and B. Kraus, Phys. Rev. A 79, 052304 (2009).
[10] R. Qu, J. Wang, Z.-s. Li, and Y.-r. Bao, Phys. Rev. A 87, 022311 (2013).
[11] M. Rossi, M. Huber, D. Bruß, and C. Macchiavello, New J. Phys. 15, 113022 (2013).
[12] P. W. Shor, Phys. Rev. A 52, R2493 (1995).
[13] T. Wagner, H. Kampermann, and D. Bruß, J. Phys. A: Math. Theor. 51, 125302 (2018).
[14] R. Raussendorf and H.
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Briegel, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' 86, 5188 (2001).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' [15] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Gachechiladze, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Gühne, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Miyake, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' A 99, 052304 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' [16] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Scarani, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Ací n, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Schenck, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Aspelmeyer, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' A 71, 042325 (2005).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' [17] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Gühne, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Tóth, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Hyllus, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Briegel, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' 95, 120405 (2005).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' [18] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Gachechiladze, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Budroni, and O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Gühne, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' 116, 062321 (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' [19] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Morimae, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Takeuchi, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Hayashi, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' A 96, 062321 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' [20] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Baccari, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Augusiak, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Š upić, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Tura, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Acín, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' 124, 020402 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' [21] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Carle, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Kraus, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Dür, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' de Vicente, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' A 87, 012328 (2013).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' [22] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Gühne, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Cuquet, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Steinhoff, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Moroder, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Rossi, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Bruß, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Kraus, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Macchiavello, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' A Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Theor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' 47, 335303 (2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' [23] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Gachechiladze, Quantum Hypergraph States and the Theory of Multiparticle Entanglement, Dissertation, Uni- versity of Siegen (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' [24] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Gachechiladze, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Tsimakuridze, and O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Gühne, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' A Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' Theor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
+page_content=' 50, 19LT01 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'}
diff --git a/ndAzT4oBgHgl3EQfqf0O/content/tmp_files/2301.01628v1.pdf.txt b/ndAzT4oBgHgl3EQfqf0O/content/tmp_files/2301.01628v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..673dad0fcc470e53704e12dee93d67eb88ecc78d
--- /dev/null
+++ b/ndAzT4oBgHgl3EQfqf0O/content/tmp_files/2301.01628v1.pdf.txt
@@ -0,0 +1,1717 @@
+Task-Effective Compression of Observations for the Centralized Control
+of a Multi-agent System Over Bit-Budgeted Channels
+Arsham Mostaani, Student Member, IEEE, Thang X. Vu, Senior Member, IEEE,
+Symeon Chatzinotas, Fellow Member, IEEE, and Bj¨orn Ottersten, Fellow Member, IEEE
+Abstract—We consider a task-effective quantization problem that arises when multiple agents are
+controlled via a centralized controller (CC). While agents have to communicate their observations to the
+CC for decision-making, the bit-budgeted communications of agent-CC links may limit the
+task-effectiveness of the system, which is measured by the system's average sum of stage costs/rewards.
+As a result, each agent should compress/quantize its observation such that the average sum of stage
+costs/rewards of the control task is minimally impacted. We address the problem of maximizing the average
+sum of stage rewards by proposing two different Action-Based State Aggregation (ABSA) algorithms that
+carry out the indirect and joint design of control and communication policies in the multi-agent system.
+While the applicability of ABSA-1 is limited to single-agent systems, it provides an analytical framework
+that acts as a stepping stone to the design of ABSA-2. ABSA-2 carries out the joint design of control and
+communication for a multi-agent system. We evaluate the algorithms - with average return as the
+performance metric - using numerical experiments performed to solve a multi-agent geometric consensus
+problem. The numerical results are concluded by introducing a new metric that measures the effectiveness
+of communications in a multi-agent system.
+Index Terms—Semantic communications, task-effective data compression, goal-oriented communications,
+communications for machine learning, multi-agent systems, reinforcement learning.
+I. INTRODUCTION
+As 5G is rolling out, a wave of new applications such
+as the internet of things (IoT), industrial internet of things
+(IIoT) and autonomous vehicles is emerging. It is projected
+that by 2030, approximately 30 billion IoT devices will be
+connected [1]. With the proliferation of non-human types of
+connected devices, the focus of the communications design is
+shifting from traditional performance metrics, e.g., bit error
+rate and latency of communications to the semantic and
+task-oriented performance metrics such as meaning/semantic
+error rate [2], [3] and the timeliness of information [4].
+To evaluate how efficiently network resources are being utilized, one could traditionally measure the
+sum rate of a network; in the era of cyber-physical systems, however, given the resource constraints of
+the network, we want to understand how effectively one can conduct a (number of) task(s) in the desired
+way [5], [6]. We are witnessing a paradigm shift in communication systems where the targeted performance
+metrics of the traditional systems are no longer valid. This imposes new grand challenges in designing
+communications towards eventual task-effectiveness [6].
+The authors are with the Centre for Security Reliability and Trust, University of Luxembourg,
+Luxembourg. Emails: {arsham.mostaani, thang.vu, symeon.chatzinotas, bjorn.ottersten}@uni.lu
+This work is supported by the European Research Council (ERC) via the project AGNOSTIC (Grant agreement
+ID: 742648).
+Figure 1. Task-effective communications for a) an estimation vs. b) a control
+task - the orange dashed box is detailed in Fig. 2 and Fig. 3.
+This line of research is partly driven by the success of new machine learning technologies/algorithms
+under the title of "emergent communications" in multi-agent systems [7]. The transfer of these new
+technologies/ideas to communication engineering is anticipated to have a disruptive effect on multiple
+domains of communication system design.
+According to Shannon and Weaver, communication problems can be divided into three levels [8]: (i) the
+technical problem: given channel and network constraints, how accurately can the communication
+symbols/bits be transmitted? (ii) the semantic problem: given channel and network constraints, how
+accurately can the communication symbols deliver the desired meaning? (iii) the effectiveness problem:
+given channel and network constraints, how accurately can the communication symbols help to fulfil the
+desired task? While traditional communication design addresses the technical problem, recently the
+semantic problem [2], [3], [5], [9], [10] as well as the effectiveness problem [6], [11]–[18] have
+attracted extensive research interest.
+In contrast to Shannon's technical-level communication framework, semantic communication can enhance
+performance by exploiting prior knowledge between source and destination [4], [19]. The semantic-based
+designs, however, are not necessarily task-effective [20]. One can design transmitters which compress the
+data with the least possible compromise on the semantic meaning being transmitted [2], [3] while the
+transmission can be task-unaware [21]. In contrast to semantic-level and technical-level communication
+design, the performance of a task-effective communication system is ultimately measured in terms of the
+average return/cost linked to the task [11]. In the (task-)effectiveness problem, we are not concerned
+only about the communication of meaning but
+arXiv:2301.01628v1 [cs.IT] 4 Jan 2023
+also about how the message exchange is helping the receiving
+end to improve its performance in the expected cost/reward of
+an estimation task [4], [13], [14], [16], [22] or a control task
+[11], [12], [14], [17], [18], [23], [24].
+There are fundamental differences between the design of task-effective communications for an estimation
+vs. a control task - Fig. 1. (i) In the latter, each agent can produce a control signal that directly
+affects the next observations of the agent. Thus, in control tasks the source of information - the local
+observations of the agent - is often a stochastic process with memory, e.g., linear or Markov decision
+processes [11], [17], [18]. In estimation tasks, however, the source of information is often assumed to
+be an i.i.d. stochastic process [13], [16], [22]. (ii) In control tasks, a control signal often has a
+long-lasting effect on the state of the system, beyond a single stage/time step: e.g., a control action
+can result in lower expected rewards in the short run but higher expected rewards in the long run. This
+makes control tasks intrinsically sensitive to the time horizon for which the control policies are
+designed. Estimation tasks, specifically when the observation process is i.i.d., can be solved in a
+single stage/time step, since there is no influence from the solution of one stage/time step to another,
+i.e., each time step can be solved separately [22], [25]. (iii) The cost function for estimation tasks is
+often in the form of a difference/distortion function, while in control tasks it can take on many other
+forms.
+In this paper, we focus on the effectiveness problem for control tasks. In particular, we investigate
+the distributed communication design of a multi-agent system (MAS) with the ultimate goal of maximizing
+the expected summation of per-stage rewards, also known as the expected return. Multiple agents select
+control actions and communicate in the MAS to accomplish a collaborative task with the help of a central
+controller (CC) - i.e., the communication network topology of the MAS is a star topology with the hub
+node being the central controller and the peripheral nodes being the agents - Fig. 2. The considered
+system architecture can find applications in several domains such as the Internet of Things, emerging
+cyber-physical systems, real-time interactive systems, vehicle-to-infrastructure communication [26] and
+collaborative perception [27].
+A. Related works: Task-effective communications for control tasks
+Authors in [11], [12], [14], [17], [18], [23], [24] consider task-effective communication design under
+different settings. While [12] utilizes task-effective communication design for the specific problem of
+designing application-tailored protocols over perfect communication channels, the communication channel
+is considered to be imperfect in [11], [14], [17], [18], [23], [24]. Authors in [14] provide algorithmic
+contributions to the design of task-effective joint source-channel coding for single-agent systems.
+Task-effective joint source and channel coding for MAS is targeted by [11], [14], [17], whereas [18],
+[23] are focused on task-effective data compression and quantization. Similar to the current paper, a
+star topology
+Figure 2. Communication topology and its applicable scenarios a) Centralized
+control of an MAS with collocated actuators and sensors, b) Distributed
+sensing with a single controller collocated with a single actuator. The orange
+dashed box is detailing the same box in Fig. 1 and Fig. 3 .
+for the inter-agent communication is considered in [11], [12], whereas [12] assumes perfect
+communications between the hub node and the peripherals and [11] assumes imperfect communication
+channels at the downlink of the peripheral nodes. In contrast to all the above-mentioned work, this
+paper is - to the best of our knowledge - the first to study the star topology with an imperfect
+(bit-budgeted) uplink (agent-to-hub) channel - Fig. 2. Accordingly, each agent observes the environment
+and communicates an abstract version of its local observation to the CC via imperfect (bit-budgeted)
+communication channels - red links in Fig. 2. Subsequently, the CC produces control actions that are
+communicated to the agents via perfect communication channels - black links in Fig. 2. The control
+actions are selected by the CC such that they maximize the average return of the collaborative task,
+where the return is a performance metric linked to the accomplishment of the task.
+B. Contributions
+In our earlier work [18], we developed a generic framework to solve task-oriented communication problems
+for a multi-agent system (MAS) with full-mesh connectivity. The current work can be considered an
+adaptation of that framework to a new problem setting for the design of task-effective communications,
+where agents follow a star network topology for their connectivity. In this direction, the current work
+extends the applicability of the proposed framework beyond the specific problem that was solved in [18]
+and provides further insights into how the framework can be used in wider terms and under a wider range
+of settings. In particular, the contributions of this work are listed below.
+• Firstly, we consider a novel problem setting in which an MAS is controlled via a central controller
+who has access to agents' local observations only through bit-budgeted distributed communications. This
+problem setting can be used in collaborative perception systems as well as vehicle-to-infrastructure
+communications, which cannot be addressed by the problem settings investigated in the prior art.
+• Secondly, our analytical studies establish the relationship between the considered joint communication
+and control design problem and conventional data quantization problems. In particular, Lemma 1 shows how
+the problem approached in this paper is a generalized version of conventional data quantization. This
+formulation is useful as it helps to find an exact solution to the problem under stronger conditions via
+ABSA-1 and under milder conditions via ABSA-2.
+• Moreover, our analytical studies help us to craft an indirect1 task-effective data quantization
+algorithm - ABSA-2. Designing a task-effective data quantization for ABSA-2 can equivalently be viewed
+as an indirect approach to feature selection for an arbitrary deep Q-network. Relying on the analysis
+carried out for ABSA-1, ABSA-2 designs distributed and bit-budgeted communications between the agents
+and the CC. ABSA-2 is seen to approach optimal performance as the memory of the CC increases. Increasing
+the memory of the CC, however, leads to higher computational complexity. Therefore, ABSA-2 is said to
+strike a trade-off between computational complexity and task efficiency.
+• Numerical experiments are carried out on a geometric consensus task to evaluate the performance of the
+proposed schemes in terms of the optimality of the MAS's expected return in the task. ABSA-1 and ABSA-2
+are compared with several other benchmark schemes introduced by [18], in a multi-agent2 scenario with
+local observability and bit-budgeted communications.
+• Finally, we introduce a new metric, called task-relevant information, for the measurement of
+effectiveness in task-oriented communication policies, which - in comparison with existing metrics such
+as positive listening and positive signalling - better explains the behaviour of a variety of
+task-effective communication schemes. The proposed metric is capable of measuring the effectiveness of a
+task-oriented communication/compression policy without the need to jointly design a control policy and
+test the jointly designed policies in the desired task.
+C. Technical approach
+Our goal is to perform an efficient representation of the agents' local observations that meets the
+bit-budget of the communication links while minimizing the effect of quantization on the average return
+of the task. To achieve this, we first need to design task-effective data quantization policies for all
+agents. In task-effective data quantization, one needs to take into account the properties of the
+average return function and the optimal control policies associated with the task [15]. In addition to
+the design of the quantization policies for all agents, we also need the control policy of the CC to be
+capable of carrying out near-optimal decision-making despite its mere access to the quantized messages -
+resulting in a joint control and data compression problem. We formulate the joint control and data
+compression problem as a generalized form of data compression: task-oriented data compression (TODC).
+Following this novel problem formulation, we propose two indirect1 action-based state aggregation (ABSA)
+algorithms: (i) ABSA-1 provides an analytical proof for a task-effective quantization, i.e., one with
+optimal performance in terms of the expected return. In this direction, ABSA-1 relaxes the lumpability
+assumption on the underlying MDP [18, Condition 6], under which the performance guarantees of the
+proposed method were previously established. Since ABSA-1 is only applicable when the system is composed
+of one agent and the CC, we also propose ABSA-2. Building on the analytical results of ABSA-1, using MAP
+estimation to relax the aforementioned limitation of ABSA-1, and benefiting from a DQN controller at the
+CC, ABSA-2 is introduced as a more general approach. (ii) ABSA-2 solves an approximated version of the
+TODC problem and carries out the quantization for any number of agents communicating with the CC. Thanks
+to a deep Q-network controller utilized at the CC, ABSA-2 can solve more complex problems where the
+controller benefits from a larger memory. Thus, ABSA-2 allows trading complexity for communication
+efficiency and vice versa. Finally, we evaluate the performance of the proposed schemes on a specific
+task: a geometric consensus problem under finite observability [28].
+1By an indirect algorithm here we mean an approach that is not dependent on our knowledge of a
+particular task. Indirect approaches are applicable to any (or a wide range of) tasks. In contrast to
+indirect schemes, direct schemes are specifically designed for a niche application [16]. As defined by
+[6]: "the direct schemes aim at guaranteeing or improving the performance of the cyber-physical system
+at a particular task by designing a task-tailored communication strategy".
+2Due to the complexity-related issues explained in Section IV, the numerical results are limited to
+two-agent and three-agent scenarios.
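To make the state-aggregation idea behind the ABSA algorithms concrete: in the single-agent case, one crude way to compress observations while preserving near-optimal control is to group together observations that share the same greedy action under a known Q-function, so the CC loses no information it needs for action selection. The sketch below illustrates only this intuition - the function name and the modulo-merging fallback are hypothetical, not the paper's ABSA-1/ABSA-2 procedures, which use principled aggregation criteria.

```python
import numpy as np


def action_based_aggregation(q_table: np.ndarray, bit_budget: int) -> np.ndarray:
    """Group observations that share the same greedy action.

    q_table: (num_obs, num_actions) array of state-action values.
    Returns one cluster label per observation, using at most
    2**bit_budget clusters so each label fits the uplink bit budget.
    """
    greedy = np.argmax(q_table, axis=1)  # optimal action per observation
    # one cluster per distinct greedy action
    actions, labels = np.unique(greedy, return_inverse=True)
    max_clusters = 2 ** bit_budget
    if len(actions) > max_clusters:
        # crude fallback when there are more actions than messages;
        # a task-effective design would merge by value similarity instead
        labels = labels % max_clusters
    return labels


# toy example: 6 observations, 3 actions, a 2-bit uplink budget
q = np.array([[1.0, 0.2, 0.1],
              [0.9, 0.3, 0.0],
              [0.1, 1.2, 0.4],
              [0.0, 1.1, 0.3],
              [0.2, 0.1, 0.8],
              [0.1, 0.0, 0.9]])
labels = action_based_aggregation(q, bit_budget=2)
```

Observations 0-1, 2-3, and 4-5 each share a greedy action, so the quantizer collapses six observations into three messages without changing the greedy decision the controller would make.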
+D. Organization
+The rest of this paper is organized as follows. Section II describes the MAS and states the joint
+control and communication problem. Section III proposes two action-based state aggregation algorithms.
+Section IV shows the performance of the proposed algorithms in a geometric consensus problem. Finally,
+Section V concludes the paper. For the reader's convenience, a summary of the notation that we follow in
+this paper is given in Table I. Bold font is used for random matrices or scalars, while their
+realizations are written in plain font.
+II. SYSTEM MODEL AND PROBLEM STATEMENT
+The problem setting we introduce here can be used to analyse both scenarios illustrated in Fig. 2.
+Nevertheless, to use our language consistently, we focus on scenario (a) of that figure throughout the
+manuscript. In particular, when we use the term "agent" we refer to an object which has all the
+following hardware capabilities: sensing, actuation, communication and data processing. A MAS, however,
+may not be comprised of mere agents, but of a combination of agents and perhaps other objects that have
+at least communication and data-processing capabilities. The central controller here is supposed to have
+the hardware capability to process relatively large data as well as the capability of communications.
+The interactions inside the MAS and outside the MAS with the environment are illustrated in Fig. 3.
+A. System model
+We consider a MAS in which multiple agents i ∈ N = {1, 2, ..., N}
+collaboratively solve a task with the aid of a CC. Following a
+centralized action policy, the CC provides the agents with their actions
+via a perfect communication channel, while it receives the observations
+of the agents through an imperfect communication channel3. The considered
+setting is similar to conventional centralized control of MASs [18],
+[30], except that the communications from the agents to the CC are
+transmitted over a bit-budgeted communication channel. The agent-hub
+communications are considered to be instantaneous and synchronous [18],
+in contrast with the delayed [17], [31] and sequential/iterative
+communication models [32]-[34]. There is no direct inter-agent
+communication in the considered system; communications occur only between
+the agents and the central controller. The system runs in discrete time
+steps t. The observation of each agent i at time step t is denoted by
+oi(t) ∈ Ω, and the state s(t) ∈ S of the system is defined by the joint
+observations s(t) ≜ ⟨o1(t), ..., oN(t)⟩4. The control action of each
+agent i at time t is denoted by mi(t) ∈ M, and the action vector
+m(t) ∈ M^N of the system is defined by the joint actions
+m(t) ≜ ⟨m1(t), ..., mN(t)⟩. The observation space Ω, state space S, and
+action space M are all discrete sets. The environment is governed by an
+underlying5 Markov Decision Process
+3In this work we follow a common assumption of the networked control
+literature [29], according to which the bit-budget only limits the uplink
+communications of the agents and not their downlink. Accordingly, the
+agents select their control actions as dictated to them by the central
+controller.
+4According to this definition, at any given time t the observations of
+any two agents i, j ∈ N are linearly independent in the Euclidean space.
+The same condition holds for the control actions of arbitrary agents.
+5As defined in the literature [10], the underlying MDP is the horizon-T′
+MDP defined by a hypothetical single agent that takes joint actions
+m(t) ∈ M^N and observes the nominal state s(t) ≜ ⟨o1(t), ..., oN(t)⟩, and
+that has the same transition model T(·) and reward model r(·) as the
+environment experienced by our MAS.
+Symbol | Meaning
+x(t) (bold) | A generic random variable generated at time t
+x(t) | Realization of x(t)
+X | Alphabet of x(t)
+|X| | Cardinality of X
+px(x(t)) | Shorthand for Pr(x(t) = x(t))
+H(x(t)) | Information entropy of x(t) (bits)
+X−x | X − {x}
+Ep(x){x} | Expectation of the random variable x over the probability distribution p(x)
+tr(t) | Realization of the system's trajectory at time t
+Table I
+TABLE OF NOTATIONS
+[Figure 3: block diagram showing the environment with transition kernel
+Pr(s′ | s, m), the central controller with control policy πm, the N agents
+with their actuators and communication policies πc_1, ..., πc_N, and the
+uplink channels constrained by log2 |C| ≤ R.]
+Figure 3. Illustration of the interactions of the CC and agents for the
+control of the environment. The red links show the communication channels
+that are bit-budgeted, implying the local (and not global) observability
+of the CC. The orange dashed box details the same box in Fig. 1 and
+Fig. 2.
+that is described by the tuple M = ⟨S, M^N, r(·), γ, T(·)⟩, where
+r(·) : S × M^N → R is the per-stage reward function and the scalar
+0 ≤ γ ≤ 1 is the discount factor. The function T(·) : S × M^N × S → [0, 1]
+is a conditional probability mass function (pmf) which represents the
+state transitions, i.e., T(s(t + 1), s(t), m(t)) = Pr(s(t + 1) | s(t), m(t)).
+According to the per-stage reward signals, the system's return within the
+time horizon T′ is denoted by
+g(t′) = Σ_{t=t′}^{T′} γ^{t−1} r(o1(t), ..., oN(t), m1(t), ..., mN(t)).   (1)
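For concreteness, the return of Eq. (1) can be evaluated directly from a recorded reward trajectory. The sketch below is ours, not the authors' code; it keeps the paper's exponent convention γ^(t−1), and the reward values are invented for illustration.

```python
# Minimal sketch (ours) of the return in Eq. (1):
# g(t') = sum_{t=t'}^{T'} gamma^(t-1) * r(t).

def system_return(rewards, t_start, gamma=0.9):
    """rewards[t-1] holds the per-stage reward r(t) for t = 1, ..., T'."""
    T_prime = len(rewards)  # horizon T' equals the last recorded step
    return sum(gamma ** (t - 1) * rewards[t - 1]
               for t in range(t_start, T_prime + 1))

# toy episode: zero per-stage reward until a team reward of 10 at t = 4
g = system_return([0, 0, 0, 10], t_start=1)  # equals 10 * 0.9**3
```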
+While the system state is jointly observable by the agents [35], each
+agent i's observation oi(t) is local6. Once per time step, agent i ∈ N is
+allowed to transmit its local observations through a communication
+message ci(t) to the CC. The communications between the agents and the
+central controller are done in a synchronous (not sequential) and
+simultaneous (not delayed) fashion [17]. Each agent i generates its
+communication message ci(t) by following its communication policy
+πc_i(·) : Ω → C. In parallel with all other agents, agent i follows
+πc_i(·) to map its current observation oi(t) to the communication message
+ci(t), which is received by the central controller in the same time step
+t. The codebook C is a set composed of a finite number of communication
+code-words c, c′, c′′, ..., c(|C|−1); we use the same notation to refer
+to the different members of the action, observation and state spaces.
+Agents' communication messages are sent over an error-free finite-rate
+bit pipe whose rate constraint is R ∈ R (bits per channel use, or
+equivalently bits per time step). As a result, the size of the
+quantization codebook must satisfy |C| ≤ 2^R. The CC exploits the
+communication messages c(t) ≜ ⟨c1(t), ..., cN(t)⟩ received within the
+last d time steps to generate the action signal m(t) following the
+control policy πm(·) : C^{Nd} → M^N. Based on the above description, the
+environment from the point of view of the CC
+6In our problem setting, the agents do not see the environment as an MDP
+due to their local observability. We only assume the presence of an
+underlying MDP for the environment, which is widely adopted in the
+reinforcement learning literature, e.g., [36], [37]. We make this
+assumption because our performance guarantees rely on the optimality of
+the solution provided for the control task, which is also assumed in [7],
+[10]. Let us recall that throughout all of our numerical studies, even
+the CC, given the joint observations of all agents, cannot observe the
+true/nominal state of the environment.
+as well as from the agents' point of view is not necessarily an MDP - as
+none of them is capable of viewing the nominal state of the environment.
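To make the uplink constraint and the CC's input concrete: an R-bit uplink admits a codebook of at most 2^R code-words, and the control policy πm conditions on the last d rounds of joint messages, an element of C^{Nd}. The sketch below is our illustration; the class name and the zero-padding convention for the first d−1 steps are assumptions, not the paper's.

```python
from collections import deque

def max_codebook_size(R):
    """An R-bit-per-time-step uplink admits at most 2**R code-words."""
    return 2 ** R

class CCMessageWindow:
    """Buffers the joint messages <c_1(t), ..., c_N(t)> of the last d
    time steps, forming the CC's policy input in C^(N*d)."""
    def __init__(self, d, N):
        self.d, self.N = d, N
        self.window = deque(maxlen=d)   # older rounds are evicted

    def push(self, joint_message):
        assert len(joint_message) == self.N
        self.window.append(tuple(joint_message))

    def controller_input(self):
        flat = [c for round_msgs in self.window for c in round_msgs]
        # zero-pad until d rounds have elapsed (a convention we assume)
        return tuple([0] * (self.d * self.N - len(flat)) + flat)

buf = CCMessageWindow(d=2, N=3)
buf.push([1, 0, 2])
buf.push([2, 2, 1])
```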
+B. Problem statement: Joint Control and Communication Design (JCCD)
+problem
+We now define the JCCD problem. Let M be the MDP governing the
+environment and let R ∈ R be the bit-budget of the uplink of every agent.
+At any time step t′, we aim to select the tuple π = ⟨πm(·), πc⟩, with
+πc ≜ ⟨πc_1(·), ..., πc_N(·)⟩, that solves the following variational
+dynamic program
+argmax_π E_π{ g(t′) },   s.t. |C| ≤ 2^R,   (2)
+where the expectation is taken over the joint pmf of the system's
+trajectory {tr}_{t′}^{T′} = o1(t′), ..., oN(t′), m(t′), ..., o1(T′), ...,
+oN(T′), m(T′), when the agents follow the policy tuple π. In the next
+section,
+similar to [18], we disentangle the design of the action and
+communication policies via action-based quantization of observations. In
+contrast to [18], here the communication network of the MAS is assumed to
+follow a star topology. The idea behind this disentanglement is to
+extract the features of the control design problem that can affect the
+communication design and to take them into account while designing the
+communications; our communication design is thus aware of the key
+features of the control task. We extract these key features using
+analytical techniques as well as reinforcement learning [17], [18]. In
+fact, the new communication problem, called task-oriented data
+compression (TODC), is no longer similar to conventional communication
+problems, as it is inspired by the JCCD problem.
+In [18], [23], the authors use the value of the agents' observations for
+the given task as the key feature of the control task considered in the
+communication design. Accordingly, the idea was to cluster together
+observation points that have similar values. In contrast to [18], [23],
+which consider the value of observations as the explicit key feature of
+the control task, here we consider the optimal control/action values
+assigned to each observation as the key feature. Accordingly, ABSA
+clusters observation points together whenever they have similar optimal
+control/action values assigned to them. Action-based state aggregation
+has already been introduced in the reinforcement learning literature as a
+means of reducing the complexity of reinforcement learning algorithms
+while maintaining the average return performance [38], [39].
+III. ACTION-BASED LOSSLESS COMPRESSION OF OBSERVATIONS
+In this section, we set yet another example - in addition to [18] - of
+the use of a generic framework to solve the JCCD problem. In [18], a
+similar problem is solved for distributed control and quantization,
+wherein the authors disentangle the design of task-oriented communication
+policies and action policies with the aid of a hypothetical functional
+Πm∗. In particular, the functional Πm∗ is a map from the vector space Kc
+of all possible communication policies πc to the vector space Km of the
+corresponding optimal control policies πm∗(·). Given the availability of
+the functional Πm∗, wherever the function πm appears in the JCCD problem
+it can be replaced with Πm∗(πc), resulting in a novel problem in which
+only the communication policies πc are to be designed. While in [18] the
+authors use an approximation of Πm∗(πc) to obtain a task-oriented
+quantizer design problem, in the current work we derive an exact solution
+for a simplified version of (3), where the number of agents communicating
+with the central controller is limited to one. To adapt ABSA to the
+generic setting of problem (3), in ABSA-2 we lift this limitation with
+the aid of an approximation technique.
+The JCCD problem can already be formulated as a form of data-quantization
+problem. Lemma 1 identifies the quantization metric that we aim to
+optimize in this paper; it reformulates the JCCD problem as a novel
+generalized data quantization problem.
+Lemma 1. The JCCD problem (2) can also be expressed as a generalized data
+quantization problem as follows:
+argmin_π E_{p(s(t))} | V^{π∗}(s(t)) − V^{πm}(c(t)) |,   s.t. |C| ≤ 2^R,   (3)
+where the communication vector c(t) generated by πc is a quantized
+version of the system's state s(t).
+Proof. Appendix A. ■
+In contrast to classic data-quantization problems, here the distortion
+metric measures the difference between two different functions of the
+original signal and its quantized version - namely V^{π∗}(·) and
+V^{πm}(·) - so the distortion measure that we aim to optimize by solving
+(3) is not conventional. In fact, the variational minimization problem is
+solved over the vector space of joint quantization policies πc and action
+policy πm functions.
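To make the unconventional distortion of (3) concrete, here is a small sketch (ours, with made-up value tables): it averages |V^{π∗}(s) − V^{πm}(πc(s))| under p(s). When the quantizer merges only states with identical values, the distortion vanishes.

```python
# Sketch of the distortion in (3) for tabulated values. V_star[s] stands
# for V^{pi*}(s), V_m[c] for V^{pi^m}(c), and pi_c maps states to
# code-words. All numbers are illustrative, not taken from the paper.

def jccd_distortion(p_s, V_star, V_m, pi_c):
    return sum(p * abs(V_star[s] - V_m[pi_c[s]]) for s, p in p_s.items())

p_s    = {"s0": 0.5, "s1": 0.25, "s2": 0.25}
V_star = {"s0": 4.0, "s1": 2.0, "s2": 2.0}
pi_c   = {"s0": "c0", "s1": "c1", "s2": "c1"}  # s1 and s2 share a code-word
V_m    = {"c0": 4.0, "c1": 2.0}                # values after quantization

d = jccd_distortion(p_s, V_star, V_m, pi_c)    # 0.0: merged states had equal value
```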
+A. ABSA-1 Algorithm
+The applicability of the proposed ABSA-1 is limited to two mathematically
+equivalent scenarios: (i) a single agent communicates with the CC -
+consider Fig. 2-a with only one agent connected to the CC - or (ii) the
+agents communicate with the CC through a relay. In the latter scenario,
+the relay has full access to the agents' observations, i.e., oi, ∀i ∈ N,
+while the relay-to-CC channel is bit-budgeted. This limited scenario
+facilitates our analytical studies of problem (3), allowing us to
+establish a theoretical proof of the losslessness of the compression in
+ABSA-1 as well as of its optimal average return performance. These
+statements are confirmed by Lemma 2, whose results will also be useful
+for designing ABSA-2. The central idea of ABSA-1 is to represent any two
+states s(i), s(j) using the same communication message c iff
+π∗(s(i)) = π∗(s(j)), where π∗(·) : S → M^N is the optimal control policy
+of the agents given access to the observations of all agents. Thus,
+ABSA-1 and ABSA-2 solve the JCCD problem in three phases: (i) solving the
+centralized control problem under perfect communications via
+reinforcement learning, i.e.,
+Q-learning, to find π∗(·)7; (ii) solving the task-oriented data
+quantization problem to find πc via a form of data clustering; and (iii)
+finding the πm corresponding to πc.
+To explain ABSA-1, we introduce the problem of task-oriented data
+compression (TODC) with centralized control. TODC is derived using
+techniques similar to those in [18] but for a different setting, i.e.,
+the communication network of the MAS has a star topology. The TODC
+problem is no longer a joint control and communication problem but a
+quantization design problem in which the features of the control problem
+are taken into account. To arrive at the TODC problem from the JCCD
+problem, we use the functional Πm∗ to replace πm(·) with Πm∗(πc). Given
+the availability of Πm∗, plugging it into the JCCD problem (2) yields the
+new problem
+argmin_{πc} E_{p(s(t))} | V^{π∗}(s(t)) − V^{Πm∗(πc)}(c(t)) |,   s.t. |C| ≤ 2^R,   (4)
+where we maximize the system's return with respect to only the
+communication policies πc(·) of the local relay. The optimal control
+policy πm∗(·) of the CC is automatically computed by the mapping
+Πm∗(πc(·)). We refer to this problem as the TODC problem. Given the
+availability of Πm∗, the JCCD problem (2) can be reduced to (4).
+Definition 1 formalizes a precise approach to solving (4), obtaining the
+communication policy of the relay πc(·) as well as the corresponding Πm∗,
+in order to solve (2).
+Definition 1. Quantization and control policies in ABSA-1:
+The communication policy πc,ABSA−1(·) designed by ABSA-1 is obtained by
+solving the following k-median clustering problem
+min_P Σ_{i=1}^{|C|} Σ_{s(t)∈Pi} | π∗(s(t)) − µi |,   (5)
+where P = {P1, ..., PB} is a partition of S and µi is the centroid of
+cluster i. The communication policy of ABSA-1, πc,ABSA−1(·), is an
+arbitrary non-injective mapping such that ∀k ∈ {1, ..., B} :
+πc,ABSA−1(s) = c(k) if and only if s ∈ Pk. Now let Cg be a function
+composition operator such that Cg f = g ◦ f. We define the operator
+Πm∗ ≜ Cg, with g = π∗((πc,ABSA−1)^{−1}(·))8.
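The central idea behind Definition 1 admits a simple sketch: when one cluster is allotted per distinct optimal joint action, grouping states that share the same π∗(s) drives the k-median objective (5) to zero. The code below is our illustration; the toy policy and the code-word indexing are assumptions.

```python
# Sketch (ours): build the ABSA-1 partition by grouping states that share
# the same optimal joint action, and assign one code-word per group.

def absa1_quantizer(pi_star):
    """pi_star: dict state -> optimal joint action. Returns (partition, pi_c)."""
    groups = {}
    for s, m in pi_star.items():
        groups.setdefault(m, []).append(s)
    codeword = {m: k for k, m in enumerate(sorted(groups))}
    pi_c = {s: codeword[m] for m, states in groups.items() for s in states}
    return groups, pi_c

pi_star = {"s0": ("up", "left"), "s1": ("up", "left"), "s2": ("down", "pause")}
groups, pi_c = absa1_quantizer(pi_star)   # two clusters, so |C| = 2 suffices
```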
+The optimality of the proposed ABSA-1 algorithm is subsequently
+established by Lemma 2.
+Lemma 2. The communication policy πc,ABSA−1 - as described by Definition
+1 - carries out lossless compression of the observation data w.r.t. the
+average return if |C| ≥ |M|^N.
+Proof. Appendix B. ■
+Remark: ABSA-1 also carries out lossless compression of the observation
+data with respect to the distortion measure introduced in problem (3).
+Given the proofs of Lemma 1 and Lemma 2, the proof of this remark is
+straightforward and is therefore omitted.
+7ABSA's bottleneck arises from the increasing complexity of Q-learning as
+the number of agents N increases. Similar limitations hold for any other
+algorithm that requires a centralized training phase [7], [30].
+8Note that since πc,ABSA−1(·) is non-injective, its inverse does not
+produce a unique output for a given input. Thus, by
+π∗((πc,ABSA−1)^{−1}(c′)) we mean π∗(s′), where s′ can be any arbitrary
+output of (πc,ABSA−1)^{−1}(c′).
+The losslessness of quantization in ABSA-1 implies that πABSA−1 results
+in no loss of the system's average return compared with the case where
+the optimal policy π∗(·) is used to control the MAS under perfect
+communications. Consequently, the control policy πm,ABSA−1(·) is optimal.
+Let us recall once again that we do not use a conventional quantization
+distortion metric here; we select a representation of the local
+observation in such a way that the conveyed message maximizes the average
+task return.
+Note that in [7], the authors do not find the higher-order function Πm∗
+that reduces the joint communication and control problem to a
+task-oriented communication design - instead, they solve an approximated
+version of the task-oriented communication design problem. In this paper,
+however, we introduce a closed-form Πm∗ through ABSA-1 that maps every
+communication policy πc,ABSA−1 introduced by ABSA-1 to the exact optimal
+control policy. This implies that the solutions provided by ABSA-1 are
+also optimal solutions of the joint communication and control design
+(JCCD) problem.
+B. ABSA-2 Algorithm
+We saw in Lemma 2 that the communication policy obtained by solving
+problem (5) is optimal and can result in lossless average return
+performance when |C| ≥ |M|^N. To solve problem (5), however, we need to
+know π∗(s(t)). This is a limiting assumption that, for ABSA-1, translates
+to two system models which are less general than the system pictured in
+Fig. 3: (i) the presence of an extra relay between the agents and the
+central controller, where the relay has perfect downlink channels to the
+agents and a single bit-budgeted channel to the CC; or (ii) a MAS
+composed of one single agent and a CC, where the uplink of the agent is
+bit-budgeted but its downlink is a perfect channel.
+Our second proposed algorithm, ABSA-2, removes the need to know π∗(s(t))
+and can run under the more general setting shown in Fig. 3. This is done
+by approximating the local element m∗_i(t) of
+π∗(s(t)) = ⟨m∗_1(t), ..., m∗_N(t)⟩ at agent i given the local observation
+oi(t) of this agent. That is, given a centralized training phase, we have
+access to the empirical joint distribution p(oi, m∗_i), from which we can
+obtain a numerical MAP estimate of m∗_i. Thus, ABSA-2 allows for fully
+distributed communication policies. In particular, the encoding of the
+communication messages of each agent is carried out separately by that
+agent before it communicates with the CC or any other agent; this form of
+encoding is often referred to as distributed encoding. Furthermore, the
+encoding carried out by ABSA-2 at each agent is a low-complexity and
+low-power process that requires no inter-agent communications beforehand.
+In this case, each agent directly communicates its encoded observations
+to the CC via a bit-budgeted communication channel. To improve the
+learning efficiency at the CC, it can take into account all the
+communications received in the time frame [t − d, t] to make a control
+decision m(t). Therefore, the ABSA-2
+[Figure 4: three subplots showing the local observation space, the action
+values assigned to the observation points, and the aggregated clusters
+Pi,1, Pi,2, Pi,3 produced by ABSA-2.]
+Figure 4. Abstract representation of states in ABSA-2 with |C| = 3 and
+|M| = 5 - |M| is represented by the number of shapes used to show the
+observation points and |C| by the number of clusters shown in the right
+subplot. The left subplot shows the observation points prior to
+aggregation. During a centralized training phase we first compute π∗(·),
+from which π∗_i(·) : Ω → M can be obtained. We use the surjection π∗_i(·)
+to map a high-dimensional/precision observation space to a
+low-dimensional/precision space. The middle subplot shows the observation
+points together with the action values assigned to them - each unique
+shape represents a unique action value. This new representation of the
+observation points embeds the features of the control problem into the
+data quantization problem. Finally, we carry out the clustering of
+observation points according to their action values - all observation
+points assigned to (a set of) action values are clustered together. The
+right subplot shows the aggregated observation space, where all the
+observation points in each cluster are represented using the same
+communication message. The centralized controller, which is run using
+DQN, observes the environment at each time step through the aggregated
+observations/communications it receives from all the agents.
+algorithm can strike a trade-off between the complexity of the
+computations carried out at the CC - directly impacted by the value of d
+- and the effectiveness of the agents' communications - inversely
+impacted by the value of |C|. Moreover, ABSA-2 is straightforwardly
+extendable to different values of |C| per agent i, instead of a single
+fixed bit-budget R = log2 |C| for all agents.
+As illustrated in Fig. 4, in ABSA-2 each agent i obtains a communication
+policy function πc_i(·) by solving a clustering problem over its local
+observation space instead of the global state space, formulated as
+follows:
+min_{Pi} Σ_{j=1}^{|C|} Σ_{oi(t)∈Pi,j} | π̃∗_i(oi(t)) − µi,j |,   (6)
+where Pi = {Pi,1, ..., Pi,|C|} is a partition of Ω, and
+π̃∗_i(oi(t)) = argmax_{m∗_i} p_{π∗}(m∗_i | oi(t)),   (7)
+and m∗_i is the optimal action of agent i, i.e., the i-th element of
+m∗ ≜ π∗(o1(t), ..., oN(t)). Thus π̃∗_i(oi(t)) is the maximum a posteriori
+estimator of m∗_i given the local observation oi(t).
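A small sketch of the MAP step (7): during the centralized training phase each agent logs (oi, m∗_i) pairs and then picks, for every observation, the empirically most likely optimal action. The code and sample data below are our illustration, not the paper's implementation.

```python
from collections import Counter, defaultdict

def map_estimator(samples):
    """samples: iterable of (o_i, m_star_i) pairs.
    Returns a dict o_i -> argmax_m p_hat(m | o_i), i.e. Eq. (7) on counts."""
    counts = defaultdict(Counter)
    for o, m in samples:
        counts[o][m] += 1
    return {o: cnt.most_common(1)[0][0] for o, cnt in counts.items()}

# toy training log for one agent: (observation, optimal local action)
samples = [(0, "up"), (0, "up"), (0, "left"), (1, "pause"), (1, "pause")]
pi_tilde = map_estimator(samples)
```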
+Once the clustering in (6) is done, each agent i constructs its local
+communication policy πc,ABSA−2_i(·), which is any non-injective mapping
+such that ∀k ∈ {1, ..., |C|} : πc,ABSA−2_i(oi) = c(k) iff oi ∈ Pi,k.
+After obtaining the communication policies ⟨πc,ABSA−2_i(·)⟩_{i=1}^{N},
+we perform single-agent reinforcement learning to obtain a proper control
+policy πm(·) at the CC corresponding to these communication policies. To
+this end, and to manage the complexity of the algorithm for larger values
+of d, we propose to use the DQN architecture [41] at the CC.
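The DQN update used at the CC (step 20 of Algorithm 1) can be sketched with a plain lookup table in place of the network, which keeps the example dependency-free; the table contents are invented, and the loss is written exactly as in the algorithm.

```python
# Sketch (ours) of the temporal-difference loss in step 20 of Algorithm 1,
# for a tabular Q over discrete joint messages instead of a neural
# network. Q plays the role of theta, Q_t the role of theta^t.

def td_loss(Q, Q_t, transition):
    """L = 1/2 * ( r + max_m Qt(c', m) - max_m Q(c, m) )^2."""
    c, m, r, c_next = transition
    return 0.5 * (r + max(Q_t[c_next].values()) - max(Q[c].values())) ** 2

Q   = {"c0": {"up": 1.0, "down": 0.0}, "c1": {"up": 0.0, "down": 2.0}}
Q_t = {"c0": {"up": 1.0, "down": 0.0}, "c1": {"up": 0.0, "down": 2.0}}

loss = td_loss(Q, Q_t, ("c0", "up", 1.0, "c1"))  # 0.5 * (1 + 2 - 1)^2 = 2.0
```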
+Algorithm 1. Action-Based State Aggregation (ABSA-2)
+1: Initialize replay memory D to capacity 10,000.
+2: Initialize the state-action value function Q(·) with random weights θ.
+3: Initialize the target state-action value function Qt(·) with weights θt = θ.
+4: Obtain π∗(·) and Q∗(·) by solving (2) using Q-learning [40], where R ≫ H(oi(t)) ∀i ∈ N.
+5: Compute π∗_i(oi(t)) = Mode(m∗_i | oi(t)), for all oi(t) ∈ Ω and all i ∈ N.
+6: Solve problem (6) by applying k-median clustering to obtain Pi and πc_i(·), for i ∈ N.
+7: for each episode k = 1 : 200,000 do
+8:   Randomly initialize the observation oi(t = 0), for i ∈ N
+9:   Randomly initialize the message c(t = 0)
+10:  for t = 1 : T′ do
+11:    Select ci(t) at agent i following πc_i(·), for i ∈ N
+12:    Obtain the message ⟨c1(t), ..., cN(t)⟩ at the CC
+13:    Follow ϵ-greedy at the CC to generate the action mi(t), for i ∈ N
+14:    Obtain the reward r(t) = r(s(t), m(t)) at the CC
+15:    Store the transition (c(t), m(t), r(t), c(t + 1)) in D
+16:    t ← t + 1
+17:  end for
+18:  Sample a mini-batch D′ = {(c(t′), m(t′), r(t′), c(t′ + 1))}_{t′=t′_1}^{t′_62} from D
+19:  for each transition t′ = t′_1 : t′_62 of the mini-batch D′ do
+20:    Compute the DQN loss Lt′(θ) = (1/2)[ r(t′) + max_{m∗} Qt(c(t′ + 1), m∗, θt) − max_{m∗} Q(c(t′), m∗, θ) ]²
+21:    Perform a gradient descent step on Lt′(θ) w.r.t. θ
+22:  end for
+23:  Update the target network Qt(·) every 1000 steps
+24: end for
+IV. PERFORMANCE EVALUATION
+In this section, we evaluate our proposed schemes via numerical results
+for the popular multi-agent geometric consensus problem9. Through
+indirect design, ABSA-1 and ABSA-
+2 never rely on explicit domain knowledge about any specific task, such
+as geometric consensus. Thus, we conjecture that their indirect design
+allows them to be applied beyond geometric consensus problems, to a much
+wider range of tasks. To make the geometric consensus task suitable for
+the evaluation of our proposed algorithms, similar to [18], we introduce
+a bit constraint on the communication channel between the agents and the
+CC. After evaluating the proposed algorithms in the context of the
+rendezvous problem, we attempt to explain the behaviour of all the
+algorithms via an existing metric for measuring the task-effectiveness of
+communications: positive listening. As positive listening falls short of
+explaining all aspects of the behaviour of the investigated algorithms,
+we also introduce a new metric, called task relevant information, which
+helps explain the behaviour of the different algorithms with higher
+accuracy and reliability.
+A. The geometric consensus problem
+Our proposed schemes are evaluated in this section through numerical
+results for the rendezvous problem [42], [43], a specific type of
+geometric consensus problem under finite observability [28]. Following
+the instantaneous and synchronous communication model and the star
+network topology explained in Section II-A and Fig. 2, respectively, the
+rendezvous problem proceeds as follows. At each time step t, several
+events happen in the following order. First, each agent i obtains a local
+observation oi(t), which is its own location in the grid world. Agent i
+subsequently follows its quantization/communication policy to generate a
+compressed version ci(t) of its observation, which is communicated to the
+CC via a bit-budgeted communication link. After receiving the quantized
+observations of all agents, the CC follows its control policy to select
+the joint action vector m(t) and communicates each agent i's local action
+mi(t) back to it. The local action mi(t) ∈ M communicated back to agent i
+via a perfect communication channel is a one-step move in the grid world,
+i.e., M = {left, right, up, down, pause}. Given each agent i's action
+mi(t), the environment evolves and transitions to the next time step
+t + 1, where each agent i obtains a new local observation oi(t + 1). All
+agents receive a single team reward
+rt = { C1, if ∃ i, j ∈ N : oi(t) ∈ ΩT and oj(t) ∉ ΩT,
+       C2, if ∄ i ∈ N : oi(t) ∈ Ω − ΩT,
+       0,  otherwise,   (8)
+where C1 < C2 and ΩT is the set of terminal observations, i.e., the
+episode terminates if ∃ i ∈ N : oi(t) ∈ ΩT. Accordingly, when not all
+agents have arrived at the target point, the smaller reward C1 = 1 is
+obtained, while the larger reward C2 = 10 is attained when all agents
+visit the goal point at the same time.
+9In our numerical experiments, the discount factor is assumed to be
+γ = 0.9. All experiments are done over a grid world of size 8×8, where
+the goal point of the rendezvous is located at grid number ΩT = {22}.
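The team reward (8), with the C1 = 1 and C2 = 10 values above and the single goal cell from footnote 9, can be sketched as follows (our code, for illustration):

```python
# Sketch of the team reward in Eq. (8): C2 when every agent is in the
# goal set Omega_T, C1 when some but not all are, and 0 otherwise.

def team_reward(observations, omega_T, C1=1, C2=10):
    at_goal = [o in omega_T for o in observations]
    if all(at_goal):
        return C2   # no agent remains outside Omega_T
    if any(at_goal):
        return C1   # some agent is at the goal while another is not
    return 0

OMEGA_T = {22}  # goal cell of the 8x8 grid world (footnote 9)
```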
+We compare our proposed ABSA algorithms with the heuristic
+non-communicative (HNC), heuristic optimal communication (HOC) and SAIC
+algorithms proposed in [18], which are direct schemes that jointly design
+the communication and control policies for the specific geometric
+consensus problem solved here. In contrast to ABSA-1 and ABSA-2, which
+enjoy an indirect design, the direct design of HOC and HNC does not allow
+them to be applied to any problem other than the specific geometric
+consensus problem with finite observability, i.e., the rendezvous problem
+explained here.
+B. Numerical experiment
+A constant learning rate α = 0.07 is applied when exact Q-learning is
+used to obtain π∗(·), and α = 0.0007 when DQN is used to learn πm(·) for
+ABSA-2. For the exact Q-learning, a UCB10 exploration rate of c = 1.25 is
+considered. The deep neural network that approximates the Q-values is a
+fully connected feed-forward network with 10 layers of depth, optimized
+using the Adam optimizer. An experience replay buffer of size 10,000 is
+used with a mini-batch size of 62. The target Q-network is updated every
+1000 steps, and for exploration a decaying ϵ-greedy scheme with initial
+ϵ = 0.05 and final ϵ = 0.005 is used [41]. In any figure in which the
+performance of a scheme is reported in terms of the averaged discounted
+cumulative reward, the attained rewards throughout training iterations
+are smoothed using a moving average filter with a memory of 20,000
+iterations. As explained in Section III-A, ABSA-1 and ABSA-2 both require
+a centralized training phase before they can be executed in a distributed
+fashion.
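The curve smoothing mentioned above is a trailing moving average; the paper uses a 20,000-iteration memory, while the toy window below is 3 so the numbers are easy to check. The sketch is ours, including the shorter-window convention at the start of the sequence.

```python
# Sketch (ours) of the trailing moving-average filter applied to the
# reported reward curves; shorter windows at the start avoid edge bias.

def moving_average(xs, window):
    out = []
    for i in range(len(xs)):
        chunk = xs[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

smoothed = moving_average([0, 3, 6, 9], window=3)  # [0.0, 1.5, 3.0, 6.0]
```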
+For all black curves, one prior centralized training phase to obtain
+π∗(·) is required. As detailed in Section III, the proposed algorithms,
+ABSA-1 and ABSA-2, leverage π∗(·) to design πc and then πm. The dashed
+curves, HOC and HNC, as proposed by [18], are heuristic schemes that
+exploit the designer's domain knowledge about the rendezvous task, making
+them inapplicable to any task other than the rendezvous problem. While
+HOC enjoys a joint control and communication design, HNC runs with no
+communication. Note that HNC and HOC require communication/coordination
+between the agents prior to the starting point of the task, which is not
+required by any other scheme. These schemes, introduced by [18], are
+detailed as follows.
+• A joint communication and control policy is designed using domain
+knowledge of the rendezvous problem. HNC agents approach the goal point
+and wait nearby for a sufficient number of time steps to ensure that the
+other agents have also arrived; only then do they move onto the goal
+point. Note that this scheme requires communication/coordination between
+the agents prior to the starting point of the task, since they must have
+agreed upon this coordination scheme in advance.
+• A joint communication and control policy is designed using domain
+knowledge of the rendezvous problem.
+10UCB is a standard scheme used in exact reinforcement learning to strike
+a trade-off between exploration and exploitation [40].
+
+9
+0
+2
+4
+6
+8
+10
+12
+14
+16
+18
+Training Iterations
+104
+0
+0.5
+1
+1.5
+2
+2.5
+3
+3.5
+4
+4.5
+5
+Average Return
+Figure 5.
+Average return comparison made between the proposed schemes
+and some benchmarks introduced in [18] - the three agent scenario under
+constant bit-budget values.
+HOC agents wait next to the goal point until the other agents inform them
+that they have also arrived; only then do they move onto the goal point.
+Note that this scheme requires communication/coordination between the
+agents prior to the starting point of the task, since they must have
+agreed upon this scheme of coordination and communication, as well as on
+the meaning that each communication message entails.
+To obtain the results demonstrated in Fig. 5, we simulated the rendezvous
+problem for a three-agent system. The black curves illustrate the
+training phase occurring at the CC to obtain πm after πc has already been
+computed using equations (5) and (6). We observe the lossless performance
+of ABSA-1 in achieving the optimal average return without requiring any
+(second-round) training. To enable fully decentralized quantization of
+the observation process, ABSA-2 was proposed, and it is seen to approach
+the optimal solution as d grows. All ABSA-2 curves are plotted with
+|C| = 3, and the ABSA-1 curve is plotted with |C| = |M|^N = 5^3 = 125 in
+the three-agent scenario of Fig. 5 and |C| = |M|^N = 5^2 = 25 in the
+two-agent scenario of Fig. 6.
+In Fig. 5, we see how the performance of ABSA-2 compares with HNC, HOC
+and SAIC at different rates of quantization. As expected, with the
+increase in the size of the quantization codebook, the average return
+performance of ABSA-2 gradually improves, approaching near-optimal
+performance at d = 3. We also observe the superior performance of ABSA-2
+compared with SAIC at very tight bit-budgets, where SAIC's performance
+sees a drastic drop. As d grows, ABSA-2 approaches the optimal return
+performance even under higher rates of quantization; however, higher
+values of d come at the cost of increased computational complexity of
+ABSA-2.
+C. Explainability of the learned communication policies
+One common metric to evaluate the effectiveness of
+communications in the literature [37] is positive listening
+I(ci(t); mj(t)), j ∈ N − {i}, which is the mutual information
+
+Figure 6. The obtained normalized average return as a function of codebook size |C|, compared across a range of schemes: the proposed schemes and the benchmarks introduced in [18] - two-agent scenario.
+
+between the communication ci(t) produced by an agent i
+and the action mj(t) selected by another agent following
+the receipt of the communication ci(t) from agent i. Positive
+signaling I(oi(t); ci(t)) is another metric proposed by [37],
+measuring the mutual information between agent i's observation
+oi(t) and its own produced communication message ci(t)
+at the same time step. As will be shown below, however, these
+metrics are unable to fully capture the underlying performance
+trends of all schemes. Therefore, we introduce, for the first
+time, a new metric called task relevant information (TRI),
+which allows us to explain the task-effectiveness of the learned
+communication policies.
+Measuring positive listening is one way to quantify the
+contribution of the communicated messages of agent i to the
+action selection of agent j. Positive signaling, on the other
+hand, measures the consistency as well as the relevance of the
+communicated messages ci(t) with respect to the agent's
+observations oi(t). As SAIC and ABSA use a deterministic
+mapping of the observation oi to produce the communication
+message ci, they are always guaranteed to exhibit positive
+signaling [37], the degree of which, however, is limited by the
+uplink channel's bit budget R = log2 |C|. Thus, among the
+existing metrics for measuring the effectiveness of communications,
+we limit our numerical studies to positive listening.
+The higher the positive listening, the stronger (though not
+necessarily better) we expect the coordination between the
+agents to be. That is, higher positive listening means a higher
+degree of dependence between the agents' actions and
+observations, which is not necessarily sufficient for the team of
+agents to fulfill the task.
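As an illustrative sketch (our own construction, not the paper's evaluation code), positive listening can be estimated from a log of discrete messages and actions by plugging empirical frequencies into the definition of mutual information; the trajectory data below is hypothetical.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information I(X; Y) in bits between two
    paired sequences of discrete symbols."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Hypothetical log of agent i's messages and agent j's subsequent actions.
msgs    = [0, 0, 1, 1, 2, 2, 0, 1]
actions = [0, 0, 1, 1, 2, 2, 0, 1]  # fully determined by the message

# With a deterministic one-to-one mapping, positive listening
# equals the message entropy H(ci(t)).
print(mutual_information(msgs, actions))  # ≈ 1.561 bits
```

When the receiver's action ignores the message, the same estimator returns zero, which is why a low positive-listening reading indicates that the receiver is not acting on the communication.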
+Figure 7 shows how stronger coordination between the agents
+and the CC often results in increased performance of the
+MAS, i.e., a higher average return. For instance, the
+enhancement in the positive-listening performance of SAIC
+from the |C| = 3 to the |C| = 4 quantizer in Fig. 7 results in
+an improved average return, as shown in Fig. 6. This metric
+also reasonably explains the improved return of ABSA-2 as
+we increase d - the memory of the CC - and the size of the
+quantization codebook
+
+Figure 7. Comparing the positive listening I(ci(t); mj(t)) performance (in bits) across a range of schemes, as a function of the size of the quantization codebook.
+
+|C|. Moreover, stronger coordination between the agents and
+the CC is visible in ABSA-2 when compared with HOC. Thus,
+we would expect a better average return for ABSA-2, which is
+in contrast to the results of Fig. 5. This suggests that stronger
+coordination - as measured by positive listening - does not
+necessarily translate into an improved average return, since
+the coordination may not be perfectly aligned with the needs
+of the task.
+The curve for the HOC scheme reminds us that a positive
+listening of 0.3 bit is sufficient to maintain the coordination
+required for optimal performance in the aforementioned
+geometric consensus task. Therefore, in the ABSA-2 and SAIC
+schemes, there is still an unnecessary influence of the
+communication messages on the actions selected by the receiving
+end. In fact, not all the information received by the receiving
+end contributes to a higher average return of the system.
+Accordingly, there is still some unnecessary data in the
+communication messages designed by ABSA that carries no
+task-specific/useful information.
+Thus, we believe that positive listening cannot explicitly
+quantify the effectiveness of task-oriented communication
+algorithms and therefore falls short in explaining their
+behaviour. Even when positive listening is computed as
+I(ci(t); m(t)), to capture the mutual information between the
+communication of agent i and the control signals of all agents,
+we arrive at almost identical patterns - see Fig. 8.
+Figure 9 investigates the performance of multiple schemes
+via a novel performance metric: task relevant information
+(TRI). Here we define the task relevant information metric
+to be
+I( πc(oi(t)); π∗(s(t)) ) = I( ci(t); m∗(t) ),   (9)
+which measures the mutual information (in bits) between the
+communicated message of agent i and the vector m∗(t) of
+joint optimal actions at the CC, which is selected by the
+optimal centralized control policy π∗(·). As demonstrated
+by Fig. 9, TRI is an indirect metric of the effectiveness of
+communications that can explain the behaviour of different
+
+Figure 8. Comparing the positive listening I(ci(t); m(t)) performance (in bits) across a range of schemes, as a function of the size of the quantization codebook.
+
+communication designs. It is also observed that the TRI metric
+magnifies the performance gap between different schemes
+as they get closer to the optimal performance. Nevertheless,
+TRI can be utilized as a standalone measure to quantify the
+effectiveness of a communication design, since it almost
+perfectly predicts the average return performance of a
+communication policy - without the communication policy having
+to be tested on the real task.
+Note that we measure the task-effectiveness of a quantization
+algorithm based on the average return that can be obtained
+when using it. Further, to measure the average return
+obtainable under the communication policies ⟨πc1(·), ..., πcN(·)⟩,
+we have to design the control policy πm(·) at the CC, which
+selects the control vector m(t) having access only to the
+quantized observations c(t) of the agents. Accordingly, we
+cannot measure the effectiveness of the communication policy
+of an MAS without a specific design for its control policy.
+Even after the control policy of the MAS is designed, it is
+challenging to tell whether the suboptimal performance of the
+algorithm is caused by an ineffective design of the control
+policy or of the communication policy. In fact, it is hard to
+disentangle the effects of the control and communication
+policies on the MAS's average return. Our proposed metric TRI
+facilitates measuring the performance of any communication
+policy in isolation, without the effect of the control policy
+being present in the numerical values of TRI.
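To illustrate how TRI decouples the communication policy from any control policy (a toy sketch under our own assumptions; the state space, quantizers, and optimal policy below are hypothetical), one can score a candidate quantizer πc directly against samples of the optimal action m∗(t) = π∗(s(t)), with no control policy in the loop:

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Empirical Shannon entropy in bits."""
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in Counter(symbols).values())

def tri(states, pi_c, pi_star):
    """Estimate Eq. (9): I(ci(t); m*(t)) = H(m*) - H(m* | ci)."""
    msgs = [pi_c(s) for s in states]
    opt = [pi_star(s) for s in states]
    n = len(states)
    h_cond = sum(
        (cnt / n) * entropy([o for c, o in zip(msgs, opt) if c == m])
        for m, cnt in Counter(msgs).items()
    )
    return entropy(opt) - h_cond

# Hypothetical 1-D task: the optimal action is the sign of the state.
states = [-2, -1, 1, 2]
pi_star = lambda s: 0 if s < 0 else 1

informative = lambda s: int(s > 0)      # preserves what pi_star needs
uninformative = lambda s: abs(s) % 2    # discards the sign entirely

print(tri(states, informative, pi_star))    # → 1.0 bit: all of H(m*) kept
print(tri(states, uninformative, pi_star))  # → 0.0 bits: no task-relevant info
```

The second quantizer has nonzero message entropy yet zero TRI, mirroring the point above: a policy can communicate a lot while conveying nothing the task needs.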
+Accordingly, the importance of introducing this metric is
+threefold: (i) using TRI as an indirect metric, we can measure
+the effectiveness of a communication policy for any specific
+task; (ii) it allows us to measure the effectiveness of the
+communication scheme prior to the design of any control
+policy; (iii) it helps to design task-effective communication
+policies in complete separation from the control policy design.
+V. CONCLUSION
+In this paper, we have investigated the joint design of control
+and communications in an MAS under centralized control
+and distributed communication policies. We first proposed
+an action-based state aggregation algorithm (ABSA-1) for
+
+Figure 9. Comparing the task relevant information (TRI) performance across a range of schemes (ABSA-2 with d = 1, 2, 3; SAIC with d = 1; HOC with d = 1), as a function of the number of communication symbols. It is observed that TRI can comprehensively explain the behaviour of all task-effective quantization schemes in a certain task without the need to measure their effectiveness via their resulting average return in the task - compare this figure with Fig. 6.
+
+lossless compression and provided analytical proof of its
+optimality. Then we proposed ABSA-2, which offers a fully
+distributed communication policy and can trade computational
+complexity for communication efficiency. We finally
+demonstrated the task-effectiveness of the proposed algorithms
+through numerical experiments on a geometric consensus
+problem, using a number of representative metrics. Furthermore,
+our numerical studies demonstrate the pressing need for further
+research on metrics that can measure and explain the
+task-effectiveness of communications more accurately. Finally,
+scalability in task-oriented design is yet another central
+challenge to be addressed in future research.
+APPENDIX A
+PROOF OF LEMMA 1
+Proof. Applying Adam's law to equation (2) yields
+argmax_π E_{p(c(t))}[ E_{p_{πc,πm}({tr}^{T′}_{t′} | c(t))}[ g(t′) | c(t) ] ],  s.t. |C| ≤ 2^R,   (10)
+where c(t) is generated by the communication policy πc and
+the joint pmf of the system's trajectory {tr}^{T′}_{t′} is directly
+influenced by the action policy πm. The conditional pmf
+p_{πc,πm}({tr}^{T′}_{t′} | c(t)) is the joint probability of the system's
+trajectory given the received communication c(t) when the
+policies πc(·) and πm(·) are followed. We proceed by negating
+equation (10) and adding to the objective function a second
+term that is constant with respect to the decision variables of
+the problem, to obtain
+argmin_{πc} E_{p(s(t))}[ E_{p_{π∗}({tr}^{T′}_{t′} | s(t))}[ g(t′) | s(t) ] ]
+            − E_{p(c(t))}[ E_{p_{πc,πm}({tr}^{T′}_{t′} | c(t))}[ g(t′) | c(t) ] ],  s.t. |C| ≤ 2^R.   (11)
+We replace the conditional expectation of the system's return
+by the value function V(·) [40, Ch. 3.5] to obtain
+argmin_{πc} E_{p(s(t))}[ V^{π∗}(s(t)) ] − E_{p(c(t))}[ V^{πm}(c(t)) ],  s.t. |C| ≤ 2^R.   (12)
+Note that the empirical joint distribution of c(t) can be
+obtained by following the communication policy πc on the
+empirical distribution of s(t):
+argmin_{πc} E_{p(s(t))}[ V^{π∗}(s(t)) ] − E_{p(s(t))}[ V^{πm}(c(t)) ],  s.t. |C| ≤ 2^R.   (13)
+As V^{π∗}(s(t)) − V^{πm}(c(t)) ≥ 0 holds for any s(t) ∈ S,
+merging the two expectations results in
+argmin_{πc} E_{p(s(t))}[ | V^{π∗}(s(t)) − V^{πm}(c(t)) | ],  s.t. |C| ≤ 2^R,   (14)
+which concludes the proof of the lemma. ■
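To make the step from (10) to (12) explicit: the inner conditional expectation is, by definition, the value function evaluated at the conditioning variable. In our own shorthand (suppressing horizon and trajectory details):

```latex
\mathbb{E}_{p(c(t))}\Big[\underbrace{\mathbb{E}_{p_{\pi^c,\pi^m}}\big[g(t')\,\big|\,c(t)\big]}_{=\,V^{\pi^m}(c(t))}\Big]
  = \mathbb{E}_{p(c(t))}\big[V^{\pi^m}(c(t))\big],
\qquad
\mathbb{E}_{p(s(t))}\Big[\underbrace{\mathbb{E}_{p_{\pi^*}}\big[g(t')\,\big|\,s(t)\big]}_{=\,V^{\pi^*}(s(t))}\Big]
  = \mathbb{E}_{p(s(t))}\big[V^{\pi^*}(s(t))\big].
```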
+APPENDIX B
+PROOF OF LEMMA 2
+Proof. We start from the result of Lemma 1 - problem (3).
+Taking the expectation over the empirical distribution of s(t)
+and applying the Bellman optimality equation, we obtain
+argmin_π (1/n) Σ_{t=1}^{n} | Q^{π∗}(s(t), π∗(s(t))) − Q^{πm}(c(t), πm(πc(s(t)))) |,  s.t. |C| ≤ 2^R,   (15)
+where the vector πc(s(t)) is N-dimensional and its i-th
+element is ci(t). We proceed by plugging π^{c,ABSA-1}(·) and
+Π^{m∗}, according to Definition 1, into equation (15) to obtain
+(1/n) Σ_{t=1}^{n} | Q^{π∗}(s(t), π∗(s(t))) − Q^{π∗}(c(t), π∗(s′)) |,   (16)
+where s′ = (π^{c,ABSA-1})^{-1}( π^{c,ABSA-1}(s(t)) ); any possible
+value of s′ lies in the same subset P_{k′} as s(t) does, and by
+the definition of P_{k′} we know that π∗(s(t)) = π∗(s′) whenever
+|C| ≥ |M|^N. Thus, by replacing π∗(s′) with π∗(s(t)) in
+equation (16), we get
+(1/n) Σ_{t=1}^{n} | Q^{π∗}(s(t), π∗(s(t))) − Q^{π∗}(s(t), π∗(s(t))) | = 0.   (17)
+This concludes the proof of Lemma 2. ■
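The mechanism the proof exploits can be sketched in a few lines (our own toy example; the Q-table, state names, and actions are hypothetical): ABSA-1-style aggregation groups states by the optimal action they induce, so acting optimally on the codeword incurs zero value loss, mirroring equation (17).

```python
# Toy ABSA-1-style aggregation: states that share the same optimal
# action are mapped to the same communication codeword.
q_star = {                       # hypothetical Q*(s, m) table
    's0': {'left': 1.0, 'right': 0.2},
    's1': {'left': 0.9, 'right': 0.1},
    's2': {'left': 0.3, 'right': 0.8},
}

def pi_star(s):
    """Optimal action under the hypothetical Q* table."""
    return max(q_star[s], key=q_star[s].get)

# One codeword per distinct optimal action: s0 and s1 collapse together.
codeword = {s: pi_star(s) for s in q_star}

# Lemma 2 in miniature: acting optimally on the codeword loses no value
# relative to acting optimally on the full state.
loss = sum(
    abs(q_star[s][pi_star(s)] - q_star[s][codeword[s]]) for s in q_star
) / len(q_star)
print(loss)  # → 0.0
```

The aggregation needs only as many codewords as there are distinct optimal actions, which is why the lossless guarantee holds once |C| ≥ |M|^N.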
+REFERENCES
+[1] L. S. Vailshery, “Number of internet of things (iot) connected devices
+worldwide from 2019 to 2021, with forecasts from 2022 to 2030,” Aug
+2022. [Online]. Available: https://www.statista.com/statistics/1183457/
+iot-connected-devices-worldwide/
+[2] B. Güler, A. Yener, and A. Swami, “The semantic communication
+game,” IEEE Transactions on Cognitive Communications and Network-
+ing, vol. 4, no. 4, pp. 787–802, 2018.
+[3] H. Tong, Z. Yang, S. Wang, Y. Hu, W. Saad, and C. Yin, “Federated
+learning based audio semantic communication over wireless networks,”
+in 2021 IEEE Global Communications Conference (GLOBECOM),
+2021, pp. 1–6.
+[4] N. Pappas and M. Kountouris, “Goal-oriented communication for real-
+time tracking in autonomous systems,” in 2021 IEEE International
+Conference on Autonomous Systems (ICAS), 2021, pp. 1–5.
+[5] E. Calvanese Strinati and S. Barbarossa, “6g networks: Beyond shannon
+towards semantic and goal-oriented communications,” Computer Net-
+works, vol. 190, p. 107930, 2021.
+[6] A. Mostaani, T. X. Vu, S. K. Sharma, Q. Liao, and S. Chatzinotas,
+“Task-oriented communication system design in cyber-physical systems:
+A survey on theory and applications,” arXiv preprint arXiv:2102.07166,
+2021.
+
+[7] J. Foerster, Y. Assael, N. de Freitas, and S. Whiteson, “Learning to
+communicate with deep multi-agent reinforcement learning,” in Proc.
+Advances in Neural Information Processing Systems, Barcelona, 2016.
+[8] C. E. Shannon and W. Weaver, The Mathematical Theory of Commu-
+nication. Urbana, IL: University of Illinois Press, 1949.
+[9] L. Hu, G. Wu, Y. Xing, and F. Wang, “Things2vec: Semantic modeling in
+the internet of things with graph representation learning,” IEEE Internet
+of Things Journal, vol. 7, no. 3, pp. 1939–1948, 2020.
+[10] J. Cai, W. Zhong, and J. Luo, “Seminer: Side-information-based seman-
+tics miner for proprietary industrial control protocols,” IEEE Internet of
+Things Journal, vol. 9, no. 22, pp. 22 796–22 810, 2022.
+[11] T.-Y. Tung, S. Kobus, J. P. Roig, and D. Gündüz, “Effective communi-
+cations: A joint learning and communication framework for multi-agent
+reinforcement learning over noisy channels,” IEEE Journal on Selected
+Areas in Communications, vol. 39, no. 8, pp. 2590–2603, 2021.
+[12] M. P. Mota, A. Valcarce, J.-M. Gorce, and J. Hoydis, “The emergence of
+wireless mac protocols with multi-agent reinforcement learning,” arXiv
+preprint arXiv:2108.07144, 2021.
+[13] N. Shlezinger and Y. C. Eldar, “Deep task-based quantization,” Entropy,
+vol. 23, no. 1, p. 104, 2021.
+[14] M. A. Gutierrez-Estevez, Y. Wu, and C. Zhou, “Learning to commu-
+nicate with intent: An introduction,” arXiv preprint arXiv:2211.09613,
+2022.
+[15] C. Zhang, H. Zou, S. Lasaulce, W. Saad, M. Kountouris, and M. Bennis,
+“Goal-oriented communications for the iot and application to data
+compression,” arXiv preprint arXiv:2211.05378, 2022.
+[16] N. Shlezinger and Y. C. Eldar, “Task-based quantization with application
+to mimo receivers,” arXiv preprint arXiv:2002.04290, 2020.
+[17] A. Mostaani, O. Simeone, S. Chatzinotas, and B. Ottersten, “Learning-
+based physical layer communications for multiagent collaboration,”
+in 2019 IEEE Intl. Symp. on Personal, Indoor and Mobile Radio
+Communications, Sep. 2019.
+[18] A. Mostaani, T. X. Vu, S. Chatzinotas, and B. Ottersten, “Task-oriented
+data compression for multi-agent communications over bit-budgeted
+channels,” IEEE Open Journal of the Communications Society, vol. 3,
+pp. 1867–1886, 2022.
+[19] M. Kountouris and N. Pappas, “Semantics-empowered communication
+for networked intelligent systems,” IEEE Communications Magazine,
+vol. 59, no. 6, pp. 96–102, 2021.
+[20] R. Carnap, Y. Bar-Hillel et al., “An outline of a theory of semantic
+information,” 1952.
+[21] H. Zhang, S. Shao, M. Tao, X. Bi, and K. B. Letaief, “Deep learning-
+enabled semantic communication systems with task-unaware transmitter
+and dynamic data,” arXiv preprint arXiv:2205.00271, 2022.
+[22] P. A. Stavrou and M. Kountouris, “A rate distortion approach to goal-
+oriented communication,” in 2022 IEEE International Symposium on
+Information Theory (ISIT).
+IEEE, 2022, pp. 590–595.
+[23] A. Mostaani, T. X. Vu, S. Chatzinotas, and B. Ottersten, “State ag-
+gregation for multiagent communication over rate-limited channels,”
+in GLOBECOM 2020-2020 IEEE Global Communications Conference.
+IEEE, 2020, pp. 1–7.
+[24] D. Kim, S. Moon, D. Hostallero, W. J. Kang, T. Lee, K. Son, and Y. Yi,
+“Learning to schedule communication in multi-agent reinforcement
+learning,” in Intl. Conf. on Learning Representations, 2019.
+[25] J. Liu, S. Shao, W. Zhang, and H. V. Poor, “An indirect rate-distortion
+characterization for semantic sources: General model and the case of
+gaussian observation,” arXiv preprint arXiv:2201.12477, 2022.
+[26] C.-M. Chou, C.-Y. Li, W.-M. Chien, and K.-c. Lan, “A feasibility study
+on vehicle-to-infrastructure communication: Wifi vs. wimax,” in 2009
+tenth international conference on mobile data management: systems,
+services and middleware.
+IEEE, 2009, pp. 397–398.
+[27] Y.-C. Liu, J. Tian, C.-Y. Ma, N. Glaser, C.-W. Kuo, and Z. Kira,
+“Who2com: Collaborative perception via learnable handshake commu-
+nication,” in 2020 IEEE International Conference on Robotics and
+Automation (ICRA).
+IEEE, 2020, pp. 6876–6883.
+[28] A. Barel, R. Manor, and A. M. Bruckstein, “Come together: Multi-agent
+geometric consensus,” arXiv preprint arXiv:1902.01455, 2017.
+[29] S. Tatikonda and S. Mitter, “Control under communication constraints,”
+IEEE Transactions on automatic control, vol. 49, no. 7, pp. 1056–1068,
+2004.
+[30] J. N. Foerster, G. Farquhar, T. Afouras, N. Nardelli, and S. Whiteson,
+“Counterfactual multi-agent policy gradients,” in Thirty-Second AAAI
+Conference on Artificial Intelligence, 2018.
+[31] F. A. Oliehoek, C. Amato et al., A concise introduction to decentralized
+POMDPs.
+Springer, 2016, vol. 1.
+[32] Z. Ding, W. Hong, L. Zhu, T. Huang, and Z. Lu, “Sequential commu-
+nication in multi-agent reinforcement learning,” 2021.
+[33] J. Albowicz, A. Chen, and L. Zhang, “Recursive position estimation
+in sensor networks,” in Proceedings Ninth International Conference on
+Network Protocols. ICNP 2001.
+IEEE, 2001, pp. 35–41.
+[34] S. Dorvash and S. Pakzad, “Stochastic iterative modal identification al-
+gorithm and application in wireless sensor networks,” Structural Control
+and Health Monitoring, vol. 20, no. 8, pp. 1121–1137, 2013.
+[35] D. V. Pynadath and M. Tambe, “The communicative multiagent team
+decision problem: Analyzing teamwork theories and models,” Journal
+of Artificial Intelligence Research, vol. 16, pp. 389–423, Jun. 2002.
+[36] F. A. Oliehoek, M. T. Spaan, N. Vlassis et al., “DEC-PoMDPs with
+delayed communication,” in Proc. Multi-agent Sequential Decision-
+Making in Uncertain Domains, Honolulu, Hawaii, May 2007.
+[37] R. Lowe, J. Foerster, Y.-L. Boureau, J. Pineau, and Y. Dauphin, “On
+the pitfalls of measuring emergent communication,” in Intl. Conf. on
+Autonomous Agents and MultiAgent Systems, 2019.
+[38] L. Li, T. J. Walsh, and M. L. Littman, “Towards a unified theory of state
+abstraction for mdps.” in AI&M, 2006.
+[39] A. K. McCallum, Reinforcement learning with selective perception and
+hidden state.
+University of Rochester, 1996.
+[40] R. S. Sutton and A. G. Barto, Introduction to reinforcement learning,
+2nd ed.
+MIT Press, Nov. 2017, vol. 135.
+[41] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G.
+Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski
+et al., “Human-level control through deep reinforcement learning,”
+nature, vol. 518, no. 7540, pp. 529–533, 2015.
+[42] P. Xuan, V. Lesser, and S. Zilberstein, “Communication decisions in
+multi-agent cooperation: Model and experiments,” in Proceedings of the
+Fifth International Conference on Autonomous Agents, ser. AGENTS
+’01. New York, NY, USA: Association for Computing Machinery, 2001,
+p. 616–623. [Online]. Available: https://doi.org/10.1145/375735.376469
+[43] C. Amato, J. S. Dibangoye, and S. Zilberstein, “Incremental policy
+generation for finite-horizon dec-pomdps,” in Nineteenth International
+Conference on Automated Planning and Scheduling, 2009.
+
diff --git a/ndAzT4oBgHgl3EQfqf0O/content/tmp_files/load_file.txt b/ndAzT4oBgHgl3EQfqf0O/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f60124dcaea2813e225ecdac63efcae5cdfa414a
--- /dev/null
+++ b/ndAzT4oBgHgl3EQfqf0O/content/tmp_files/load_file.txt
@@ -0,0 +1,913 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf,len=912
+page_content='1 Task-Effective Compression of Observations for the Centralized Control of a Multi-agent System Over Bit-Budgeted Channels Arsham Mostaani, Student Member, IEEE, Thang X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Vu, Senior Member, IEEE, Symeon Chatzinotas, Fellow Member, IEEE, and Bj¨orn Ottersten, Fellow Member, IEEE Abstract—We consider a task-effective quantization problem that arises when multiple agents are controlled via a centralized controller (CC).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' While agents have to communicate their obser- vations to the CC for decision-making, the bit-budgeted commu- nications of agent-CC links may limit the task-effectiveness of the system which is measured by the system’s average sum of stage costs/rewards.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' As a result, each agent should compress/quantize its observation such that the average sum of stage costs/rewards of the control task is minimally impacted.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' We address the problem of maximizing the average sum of stage rewards by proposing two different Action-Based State Aggregation (ABSA) algorithms that carry out the indirect and joint design of control and communication policies in the multi-agent system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' While the applicability of ABSA-1 is limited to single-agent systems, it provides an analytical framework that acts as a stepping stone to the design of ABSA-2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' ABSA-2 carries out the joint design of control and communication for a multi-agent system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' We evaluate the algorithms - with average return as the performance metric - using numerical experiments performed to solve a multi-agent geometric consensus problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' The numerical results are concluded by introducing a new metric that measures the effectiveness of communications in a multi-agent system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Index Terms—Semantic communications, task-effective data compression, goal-oriented communications, communications for machine learning, multi-agent systems, reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' INTRODUCTION As 5G is rolling out, a wave of new applications such as the internet of things (IoT), industrial internet of things (IIoT) and autonomous vehicles is emerging.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' It is projected that by 2030, approximately 30 billion IoT devices will be connected [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' With the proliferation of non-human types of connected devices, the focus of the communications design is shifting from traditional performance metrics, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=', bit error rate and latency of communications to the semantic and task-oriented performance metrics such as meaning/semantic error rate [2], [3] and the timeliness of information [4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' To evaluate how efficiently the network resources are being utilized, one could traditionally measure the sum rate of a network whereas in the era of the cyber-physical systems, given the resource constraints of the network, we want to understand how effectively one can conduct a (number of) task(s) in the desired way [5], [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' We are witnessing a paradigm shift in communication systems where the targeted performance metrics of the traditional systems are no longer valid.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' This imposes new grand challenges in designing the communications towards the eventual task-effectiveness [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' The authors are with the Centre for Security Reliability and Trust, Uni- versity of Luxembourg, Luxembourg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Emails: {arsham.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='mostaani, thang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='vu, symeon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='chatzinotas, bjorn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='ottersten}@uni.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='lu This work is supported by European Research Council (ERC) via the project AGNOSTIC (Grant agreement ID: 742648).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Environment Environment Controller 2 Control Control Controller 2 Sensor 1 Sensor 2 Comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Local Observation Local Observation Reward/ cost Reward/ cost a) b) Local Observation Stage reward/ cost Local Observation Stage reward/ cost Figure 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Task-effective communications for a) an estimation vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' b) a control task - the orange dashed box is detailed in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 2 and Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' This line of research is also driven partly due to the success of new machine learning technologies/ algorithms under the title of ”emergent communications” in multi-agent systems [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Transfer of these new technologies/ideas to communication en- gineering is anticipated to have a disruptive effect in multiple domains of the design of communication systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' According to Shannon and Weaver, communication prob- lems can be divided into three levels [8]: (i) technical problem: given channel and network constraints, how accurately can the communication symbols/bits be transmitted?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' (ii) semantic problem: given channel and network constraints, how accu- rately the communication symbols can deliver the desired meaning?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
(iii) effectiveness problem: given channel and network constraints, how accurately can the communication symbols help to fulfil the desired task?

arXiv:2301.01628v1 [cs.IT] 4 Jan 2023

While traditional communication design addresses the technical problem, the semantic problem [2], [3], [5], [9], [10] as well as the effectiveness problem [6], [11]–[18] have recently attracted extensive research interest. In contrast to Shannon's technical-level communication framework, semantic communication can enhance performance by exploiting prior knowledge shared between source and destination [4], [19]. Semantic-based designs, however, are not necessarily task-effective [20]: one can design transmitters that compress the data with the least possible compromise on the semantic meaning being transmitted [2], [3], while the transmission remains task-unaware [21]. In contrast to semantic-level and technical-level communication design, the performance of a task-effective communication system is ultimately measured in terms of the average return/cost linked to the task [11]. In the (task-)effectiveness problem, we are concerned not only with the communication of meaning but also with how the message exchange helps the receiving end improve the expected cost/reward of an estimation task [4], [13], [14], [16], [22] or a control task [11], [12], [14], [17], [18], [23], [24].
There are fundamental differences between the design of task-effective communications for an estimation vs. a control task (Fig. 1). (i) In the latter, each agent can produce a control signal that directly affects its next observations. Thus, in control tasks the source of information, i.e., the agent's local observations, is often a stochastic process with memory, e.g., a linear or Markov decision process [11], [17], [18]. In estimation tasks, however, the source of information is often assumed to be an i.i.d. stochastic process [13], [16], [22]. (ii) In control tasks, a control signal often has a long-lasting effect on the state of the system beyond a single stage/time step; e.g., a control action can result in lower expected rewards in the short run but higher expected rewards in the long run. This makes control tasks intrinsically sensitive to the time horizon for which the control policies are designed. Estimation tasks, specifically when the observation process is i.i.d., can be solved in a single stage/time step, since the solution of one stage does not influence another; i.e., each time step can be solved separately [22], [25]. (iii) The cost function for estimation tasks is often a difference/distortion function, while in control tasks it can take many other forms.
In this paper, we focus on the effectiveness problem for control tasks. In particular, we investigate the distributed communication design of a multi-agent system (MAS) with the ultimate goal of maximizing the expected summation of per-stage rewards, also known as the expected return. Multiple agents select control actions and communicate in the MAS to accomplish a collaborative task with the help of a central controller (CC); i.e., the communication network topology of the MAS is a star topology with the hub node being the central controller and the peripheral nodes being the agents (Fig. 2). The considered system architecture finds applications in several domains, such as the Internet of Things, emerging cyber-physical systems, real-time interactive systems, vehicle-to-infrastructure communication [26] and collaborative perception [27].
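The objective named above, the expected summation of per-stage rewards, can be illustrated with a minimal Monte Carlo sketch. This is our own toy illustration, not code from the paper; the names `expected_return` and `toy_episode` are hypothetical.

```python
import random

def expected_return(run_episode, num_episodes=1000, horizon=20):
    """Monte Carlo estimate of the expected return: the mean over
    episodes of the sum of per-stage rewards r(t), t = 0..horizon-1."""
    total = 0.0
    for _ in range(num_episodes):
        rewards = run_episode(horizon)  # list of per-stage rewards
        total += sum(rewards)
    return total / num_episodes

# Toy episode: reward 1 whenever a random walk returns to the origin.
def toy_episode(horizon):
    pos, rewards = 0, []
    for _ in range(horizon):
        pos += random.choice([-1, 1])
        rewards.append(1.0 if pos == 0 else 0.0)
    return rewards
```

A communication design is then task-effective to the extent that, under the bit budget, the induced control policy keeps this estimate close to its unconstrained optimum.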
A. Related works: Task-effective communications for control tasks

The authors in [11], [12], [14], [17], [18], [23], [24] consider task-effective communication design under different settings. While [12] utilizes task-effective communication design for the specific problem of designing application-tailored protocols over perfect communication channels, the communication channel is considered imperfect in [11], [14], [17], [18], [23], [24]. The authors in [14] provide algorithmic contributions to the design of task-effective joint source-channel coding for single-agent systems. Task-effective joint source and channel coding for MASs is targeted by [11], [14], [17], whereas [18], [23] focus on task-effective data compression and quantization.
Similar to the current paper, a star topology for inter-agent communication is considered in [11], [12], whereas [12] assumes perfect communications between the hub node and the peripherals and [11] assumes imperfect communication channels on the downlink of the peripheral nodes. In contrast to all the above-mentioned work, this paper is, to the best of our knowledge, the first to study the star topology with an imperfect (bit-budgeted) uplink (agent-to-hub) channel (Fig. 2).

Figure 2. Communication topology and its applicable scenarios: a) centralized control of an MAS with collocated actuators and sensors (sensing, actuation, communication and processing power at the agents); b) distributed sensing with a single controller collocated with a single actuator. The orange dashed box details the same box in Fig. 1 and Fig. 3.
Accordingly, each agent observes the environment and communicates an abstract version of its local observation to the CC via imperfect (bit-budgeted) communication channels (red links in Fig. 2). Subsequently, the CC produces control actions that are communicated to the agents via perfect communication channels (black links in Fig. 2). The control actions are selected by the CC so as to maximize the average return of the collaborative task, where the return is a performance metric linked to the accomplishment of the task.
B. Contributions

In our earlier work [18], we developed a generic framework to solve task-oriented communication problems for a multi-agent system (MAS) with full mesh connectivity. The current work can be considered an adaptation of that framework to a new problem setting for the design of task-effective communications, in which the agents follow a star network topology for their connectivity. In this direction, the current work extends the applicability of the proposed framework beyond the specific problem solved in [18] and provides further insight into how the framework can be used under a wider range of settings. In particular, the contributions of this work are listed below.
Firstly, we consider a novel problem setting in which an MAS is controlled via a central controller that has access to the agents' local observations only through bit-budgeted distributed communications. This problem setting can be used in collaborative perception systems as well as vehicle-to-infrastructure communications, which cannot be addressed by the problem settings investigated in the prior art. Secondly, our analytical studies establish the relationship between the considered joint communication and control design problem and conventional data quantization problems. In particular, Lemma 1 shows how the problem approached in this paper is a generalized version of conventional data quantization. This formulation is useful, as it helps to find an exact solution to the problem under stronger conditions via ABSA-1 and under milder conditions via ABSA-2.
Moreover, our analytical studies help us to craft an indirect¹ task-effective data quantization algorithm, ABSA-2. Designing task-effective data quantization for ABSA-2 can equivalently be viewed as an indirect approach to feature selection for an arbitrary deep Q-network. Relying on the analysis carried out for ABSA-1, ABSA-2 designs distributed and bit-budgeted communications between the agents and the CC. ABSA-2 is seen to approach optimal performance as the memory of the CC increases. Increasing the memory of the CC, however, leads to higher computational complexity; therefore, ABSA-2 strikes a trade-off between computational complexity and task efficiency.
Numerical experiments are carried out on a geometric consensus task to evaluate the performance of the proposed schemes in terms of the optimality of the MAS's expected return in the task. ABSA-1 and ABSA-2 are compared with several other benchmark schemes introduced in [18] in a multi-agent² scenario with local observability and bit-budgeted communications. Finally, we introduce a new metric, called task-relevant information, for measuring the effectiveness of task-oriented communication policies; in comparison with existing metrics such as positive listening and positive signalling, it better explains the behaviour of a variety of task-effective communication schemes. The proposed metric is capable of measuring the effectiveness of a task-oriented communication/compression policy without the need to jointly design a control policy and test the jointly designed policies in the desired task.
C. Technical approach

Our goal is to find an efficient representation of the agents' local observations that meets the bit budget of the communication links while minimizing the effect of quantization on the average return of the task.

¹By an indirect algorithm we mean an approach that does not depend on knowledge of a particular task; indirect approaches are applicable to any (or a wide range of) tasks. In contrast to indirect schemes, direct schemes are specifically designed for a niche application [16]. As defined in [6]: "the direct schemes aim at guaranteeing or improving the performance of the cyber-physical system at a particular task by designing a task-tailored communication strategy".
²Due to the complexity-related issues explained in Section IV, the numerical results are limited to two-agent and three-agent scenarios.
To achieve this, we first need to design task-effective data quantization policies for all agents. In task-effective data quantization, one needs to take into account the properties of the average return function and the optimal control policies associated with the task [15]. In addition to designing the quantization policies for all agents, we also need the control policy of the CC to be capable of near-optimal decision-making despite its mere access to the quantized messages, resulting in a joint control and data compression problem. We formulate the joint control and data compression problem as a generalized form of data compression: task-oriented data compression (TODC). Following this novel problem formulation, we propose two indirect action-based state aggregation (ABSA) algorithms: (i) ABSA-1 provides an analytical proof of task-effective quantization, i.e., with optimal performance in terms of the expected return.
In this direction, ABSA-1 relaxes the lumpability assumption on the underlying MDP under which the performance guarantees of the method proposed in [18, Condition 6] were established. Since ABSA-1 is only applicable when the system is composed of one agent and the CC, we also propose ABSA-2. Following the analytical results of ABSA-1, using MAP estimation to relax the aforementioned limitation of ABSA-1, and benefiting from a DQN controller at the CC, ABSA-2 is introduced as a more general approach. (ii) ABSA-2 solves an approximated version of the TODC problem and carries out the quantization for any number of agents communicating with the CC. Thanks to the deep Q-network controller utilized at the CC, ABSA-2 can solve more complex problems where the controller benefits from a larger memory. Thus, ABSA-2 allows trading complexity for communication efficiency and vice versa. Finally, we evaluate the performance of the proposed schemes on a specific task: a geometric consensus problem under finite observability [28].
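The ABSA algorithms themselves are defined in Section III. As a rough illustration of the general idea behind action-based state aggregation — our own sketch under the assumption that states sharing the same greedy action can share a codeword, not the authors' algorithm — consider:

```python
from collections import defaultdict
import math

def action_based_aggregation(states, greedy_action):
    """Group states that share the same greedy action and assign one
    codeword per group, so ceil(log2(#groups)) bits suffice instead
    of ceil(log2(#states))."""
    groups = defaultdict(list)
    for s in states:
        groups[greedy_action(s)].append(s)
    codebook = {}
    for code, (_, members) in enumerate(sorted(groups.items())):
        for s in members:
            codebook[s] = code
    bits = max(1, math.ceil(math.log2(len(groups))))
    return codebook, bits

# Toy example: 8 scalar states, greedy action = sign of the state.
states = [-4, -3, -2, -1, 1, 2, 3, 4]
codebook, bits = action_based_aggregation(states, lambda s: s > 0)
# All negative states share one codeword and all positive states
# another, so a 1-bit uplink message suffices for the controller
# to pick the same action it would pick from the full state.
```

In this toy case the aggregation is lossless with respect to the induced actions; the paper's contribution concerns when and how such guarantees hold under bit budgets and multiple agents.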
D. Organization

The rest of this paper is organized as follows. Section II describes the MAS and states the joint control and communication problem. Section III proposes two action-based state aggregation algorithms. Section IV demonstrates the performance of the proposed algorithms on a geometric consensus problem. Finally, Section V concludes the paper. For the reader's convenience, a summary of the notation followed in this paper is given in Table I. Bold font is used for matrices or scalars that are random; their realizations follow simple font.
II. SYSTEM MODEL AND PROBLEM STATEMENT

The problem setting we introduce here can be used to analyse both scenarios illustrated in Fig. 2. Nevertheless, to use our language consistently, we focus on scenario (a) of that figure throughout the manuscript. In particular, when we use the term "agent" we refer to an object that has all of the following hardware capabilities: sensing, actuation, communication and data processing. A MAS, however, may not be comprised of agents alone, but of a combination of agents and perhaps other objects that have at least the hardware capabilities for communication and data processing. The central controller is assumed to have the hardware capability to process relatively large amounts of data as well as the capability of communications. The interactions inside the MAS, and between the MAS and the environment, are illustrated in Fig. 3.
A. System model

We consider a MAS in which multiple agents i ∈ N = {1, 2, ..., N} collaboratively solve a task with the aid of a CC. Following a centralized action policy, the CC provides the agents with their actions via a perfect communication channel, while it receives the observations of the agents through an imperfect communication channel³. The considered setting is similar to the conventional centralized control of MASs [18], [30], except for the fact that the communications from the agents to the CC are transmitted over a bit-budgeted communication channel. The agent-hub communications are considered to be instantaneous and synchronous [18]. This is in contrast with the delayed [17], [31] and sequential/iterative communication models [32]–[34]. We note that there is no direct inter-agent communication in the considered system; communications occur only between the agents and the central controller.

The system runs on discrete time steps t. The observation of each agent i at time step t is denoted by oi(t) ∈ Ω, and the state s(t) ∈ S of the system is defined by the joint observations s(t) ≜ ⟨o1(t), ..., oN(t)⟩⁴. The control action of each agent i at time t is denoted by mi(t) ∈ M, and the action vector m(t) ∈ M^N of the system is defined by the joint actions m(t) ≜ ⟨m1(t), ..., mN(t)⟩. The observation space Ω, state space S and action space M are all discrete sets. The environment is governed by an underlying⁵ Markov decision process (MDP).
+page_content=' The environ- ment is governed by an underlying5 Markov Decision Process 3In this work we follow a common assumption used in the networked control literature [29] according to which the bit-budget only limits the uplink communications of the agents and not their downlink.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Accordingly, the agents select their control actions as is dictated to them by the central controller.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 4According to this definition, at any given time t the observations of any two agent i, j ∈ N are linearly independent in the Euclidean space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' The same conditions are true for the control actions of arbitrary agents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 5As defined in the literature [10], the underlying MDP’ is the horizon-T ′ MDP defined by a hypothetical single agent that takes joint actions m(t) ∈ MN and observes the nominal state s(t) ≜ ⟨o1(t), .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' , oN(t)⟩ that has the same transition model T(·) and reward model r(·) as the environment experienced by our MAS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
Table I. Table of notations:
x(t) (boldface): a generic random variable generated at time t
x(t): realization of x(t)
X: alphabet of x(t)
|X|: cardinality of X
px(x(t)): shorthand for Pr(x(t) = x(t))
H(x(t)): information entropy of x(t) (bits)
X−x: X − {x}
Ep(x){x}: expectation of the random variable X over the probability distribution p(x)
tr(t): realization of the system's trajectory at time t

Figure 3. Illustration of the interactions of the CC and agents for the control of the environment. The red links show the communication channels that are bit-budgeted, implying the local (and not global) observability of the CC. The orange dashed box details the same box in Fig. 1 and Fig. 2.
that is described by the tuple M = ⟨S, M^N, r(·), γ, T(·)⟩, where r(·) : S × M^N → R is the per-stage reward function and the scalar 0 ≤ γ ≤ 1 is the discount factor. The function T(·) : S × M^N × S → [0, 1] is a conditional probability mass function (pmf) which represents state transitions, so that T(s(t + 1), s(t), m(t)) = Pr(s(t + 1) | s(t), m(t)).
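As a minimal sketch of the transition model T(·) viewed as a conditional pmf, consider a hypothetical two-state, two-joint-action system (all numbers below are illustrative assumptions, not values from the paper):

```python
S = ["s0", "s1"]
M_joint = ["m0", "m1"]          # joint actions m(t)

# T[(s, m)] is the pmf over next states: Pr(s' | s, m)
T = {
    ("s0", "m0"): {"s0": 0.9, "s1": 0.1},
    ("s0", "m1"): {"s0": 0.2, "s1": 0.8},
    ("s1", "m0"): {"s0": 0.5, "s1": 0.5},
    ("s1", "m1"): {"s0": 0.0, "s1": 1.0},
}

# Each conditional distribution must sum to one for T(.) to be a valid pmf
for pmf in T.values():
    assert abs(sum(pmf.values()) - 1.0) < 1e-9
print("valid transition model")
```

The dictionary-of-pmfs layout makes the "conditional pmf" requirement directly checkable: one normalization test per (s, m) pair.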
According to the per-stage reward signals, the system's return within the time horizon T′ is denoted by

g(t′) = Σ_{t=t′}^{T′} γ^{t−1} r(o1(t), . . . , oN(t), m1(t), . . . , mN(t)).  (1)

While the system state is jointly observable by the agents [35], each agent i's observation oi(t) is local6.
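As a small numeric sketch of Equation (1), the return g(t′) can be computed directly from a recorded sequence of per-stage rewards (the reward values and horizon below are made up for illustration):

```python
# g(t') = sum_{t=t'}^{T'} gamma^(t-1) * r_t for illustrative rewards r_t
gamma = 0.9
t_prime, T_horizon = 1, 4
rewards = {1: 0.0, 2: 1.0, 3: 0.0, 4: 2.0}   # hypothetical r at each step

g = sum(gamma ** (t - 1) * rewards[t] for t in range(t_prime, T_horizon + 1))
print(g)   # 0.9 * 1.0 + 0.9**3 * 2.0
```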
Once per time step, agent i ∈ N is allowed to transmit its local observations through a communication message ci(t) to the CC. The communications between agents and the central controller are done in a synchronous (not sequential) and simultaneous (not delayed) fashion [17]. Each agent i generates its communication message ci(t) by following its communication policy πc_i(·) : Ω → C. In parallel to all other agents, agent i follows the communication policy πc_i(·) to map its current observation oi(t) to the communication message ci(t), which is received by the central controller in the same time step t. The code-book C is a set composed of a finite number of communication code-words c, c′, c″, . . . , c^(|C|−1); we use the same notation to refer to the members of the action, observation, and state spaces. Agents' communication messages are sent over an error-free finite-rate bit pipe whose rate constraint is R ∈ R (bits per channel use, or equivalently bits per time step). As a result, the size of the quantization codebook should satisfy |C| ≤ 2^R.
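The rate constraint |C| ≤ 2^R can be checked mechanically; conversely, a codebook of size |C| costs ⌈log2 |C|⌉ bits per time step. A short sketch with assumed numbers:

```python
import math

R = 3                        # bit-budget in bits per channel use (assumed)
max_codewords = 2 ** R       # largest admissible codebook size
codebook = ["c0", "c1", "c2", "c3", "c4"]   # hypothetical code-words

bits_needed = math.ceil(math.log2(len(codebook)))
assert len(codebook) <= max_codewords       # the rate constraint |C| <= 2^R
print(max_codewords, bits_needed)
```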
The CC exploits the communication messages c(t) ≜ ⟨c1(t), . . . , cN(t)⟩ received within the last d time steps to generate the action signal m(t) following the control policy πm(·) : C^(Nd) → M^N.
Based on the above description, the environment from the point of view of the CC, as well as from the agents' point of view, is not necessarily an MDP, as neither is capable of viewing the nominal state of the environment.

6 In our problem setting, each agent does not see the environment as an MDP due to its local observability. We only assume the presence of an underlying MDP for the environment, an assumption widely adopted in the reinforcement learning literature, e.g., [36], [37]. We make this assumption because our performance guarantees rely on the optimality of the solution provided for the control task, which is also assumed in [7], [10]. Let us recall that throughout all of our numerical studies, even the CC, given the joint observations of all agents, cannot observe the true/nominal state of the environment.
B. Problem statement: Joint Control and Communication Design (JCCD) problem

Now we define the JCCD problem. Let M be the MDP governing the environment and the scalar R ∈ R be the bit-budget of the uplink of all agents. At any time step t′, we aim at selecting the tuple π = ⟨πm(·), πc⟩ with πc ≜ ⟨πc_1(·), . . . , πc_N(·)⟩ to solve the following variational dynamic program:

argmax_π E_π{g(t′)},  s.t. |C| ≤ 2^R,  (2)

where the expectation is taken over the joint pmf of the system's trajectory {tr}_{t′}^{T′} = o1(t′), . . . , oN(t′), m(t′), . . . , o1(T′), . . . , oN(T′), m(T′) when the agents follow the policy tuple π.
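To make the search space of (2) concrete, here is a minimal brute-force sketch for a hypothetical single-agent instance with deterministic toy dynamics (all names and numbers are illustrative assumptions, not from the paper): every communication map πc : S → C with |C| = 2^R and every control map πm : C → M is enumerated, and the pair with the highest finite-horizon return is kept.

```python
import itertools

S = [0, 1, 2]                  # toy state/observation space (assumed)
M = [0, 1]                     # toy action space (assumed)
R_bits = 1
C = list(range(2 ** R_bits))   # |C| <= 2^R codewords
gamma, T_horizon = 0.9, 4

def step(s, m):                # toy deterministic transition (assumption)
    return (s + 1) % 3 if m == 1 else s

def reward(s, m):              # toy per-stage reward (assumption)
    return 1.0 if (s == 2 and m == 1) else 0.0

def ret(pi_c, pi_m, s0=0):
    # finite-horizon return g(1) under the policy tuple (pi_c, pi_m)
    s, g = s0, 0.0
    for t in range(1, T_horizon + 1):
        m = pi_m[pi_c[s]]      # the CC only sees the codeword pi_c[s]
        g += gamma ** (t - 1) * reward(s, m)
        s = step(s, m)
    return g

best = max(ret(pc, pm)
           for pc in itertools.product(C, repeat=len(S))   # all maps S -> C
           for pm in itertools.product(M, repeat=len(C)))  # all maps C -> M
print(best)
```

This exhaustive search is only viable for tiny spaces; the disentanglement discussed next exists precisely because the joint space of (πc, πm) grows combinatorially.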
In the next section, similarly to [18], we will disentangle the design of action and communication policies via action-based quantization of observations. In contrast to [18], here the communication network of the MAS is assumed to follow a star topology. The idea behind this disentanglement is to extract the features of the control design problem that can affect the communication design and to take them into account while designing the communications. Thus our communication design will be aware of the key features of the control task. We extract these key features using analytical techniques as well as reinforcement learning [17], [18]. In fact, the new communication problem, called TODC, will no longer resemble conventional communication problems, as it is inspired by the JCCD problem. In [18], [23], the authors use the value of agents' observations for the given task as the key feature of the control task considered in the communication design. Accordingly, the idea was to cluster together the observation points that have similar values. In contrast to [18], [23], which consider the value of observations as an explicit key feature of the control task, here we consider the optimal control/action values assigned to each observation as the key feature. Accordingly, ABSA clusters observation points together whenever they have similar optimal control/action values assigned to them. Action-based state aggregation has already been introduced in the reinforcement learning literature as a means of reducing the complexity of reinforcement learning algorithms while maintaining the average return performance [38], [39].
III. ACTION-BASED LOSSLESS COMPRESSION OF OBSERVATIONS

In this section, we set another example, in addition to [18], of the use of a generic framework to solve the JCCD problem. In [18], a similar problem is solved for distributed control and quantization, wherein the authors disentangle the design of task-oriented communication policies and action policies with the aid of a hypothetical functional Π^{m∗}. In particular, the functional Π^{m∗} is a map from the vector space K^c of all possible communication policies πc to the vector space K^m of the corresponding optimal control policy πm∗(·). Upon the availability of the functional Π^{m∗}, wherever the function πm appears in the JCCD problem it can be replaced with Π^{m∗}(πc), resulting in a novel problem in which only the communication policies πc are to be designed. While in [18] the authors use an approximation of Π^{m∗}(πc) to obtain a task-oriented quantizer design problem, in the current work we derive an exact solution for a simplified version of (3), where the number of agents communicating with the central controller is limited to one. To adapt ABSA to the generic setting of problem (3), in ABSA-2 we will lift this limitation with the aid of an approximation technique. The JCCD problem can be formulated as a form of data-quantization problem. Lemma 1 identifies the quantization metric that we aim to optimize in this paper: it reformulates the JCCD problem as a novel generalized data quantization problem.
Lemma 1. The JCCD problem (2) can also be expressed as a generalized data quantization problem as follows:

argmin_π E_{p(s(t))} |V^{π∗}(s(t)) − V^{πm}(c(t))|,  s.t. |C| ≤ 2^R,  (3)

where the communication vector c(t) generated by πc is a quantized version of the system's state s(t).

Proof. Appendix A. ■
+page_content=' ■ In contrast to the classic data-quantization problems, here the distortion metric, measures the difference between two dif- ferent functions of the original signal and its quantized version namely V π∗(·) and V πm(·) - thus the distortion measure that we aim to optimize by solving (3) is not conventional.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' In fact, the variational minimization problem is solved over the vector space of joint quantization policies πc and action policy πm functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
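For a fixed quantizer and value tables, the distortion in (3) is a plain expectation. The sketch below evaluates it with invented values for V^{π∗}, V^{πm}, and p(s):

```python
# Evaluating E_{p(s)} |V^{pi*}(s) - V^{pi_m}(c)| for a given quantizer pi_c;
# every table below is an illustrative assumption, not data from the paper.
V_star = {"s0": 5.0, "s1": 4.8, "s2": 1.0}        # V^{pi*}(s)
pi_c   = {"s0": "c0", "s1": "c0", "s2": "c1"}     # quantizer s -> codeword
V_m    = {"c0": 4.9, "c1": 1.0}                   # V^{pi_m}(c)
p_s    = {"s0": 0.5, "s1": 0.3, "s2": 0.2}        # state distribution p(s)

distortion = sum(p_s[s] * abs(V_star[s] - V_m[pi_c[s]]) for s in V_star)
print(round(distortion, 3))
```

Note how the metric compares values of two different functions (V^{π∗} on the state, V^{πm} on the codeword), which is what separates (3) from a conventional quantization distortion.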
A. ABSA-1 Algorithm

The applicability of the proposed ABSA-1 is limited to two mathematically equivalent scenarios: (i) a single agent communicates with the CC (consider Fig. 2-a with only one agent connected to the CC), or (ii) the agents communicate with the CC through a relay. In the latter scenario, the relay has full access to the agents' observations, i.e., oi, ∀i ∈ N, while the relay-to-CC channel is bit-budgeted. This limited scenario facilitates our analytical studies of problem (3), allowing us to establish a theoretical proof of the losslessness of compression in ABSA-1 as well as of its optimal average return performance. These statements will be confirmed by Lemma 2, the results of which will also be useful for designing ABSA-2. The central idea of ABSA-1 is to represent any two states s(i), s(j) using the same communication message c iff π∗(s(i)) = π∗(s(j)), where π∗(·) : S → M^N is the optimal control policy of the agents given access to the observations of all agents. Thus, ABSA-1 and ABSA-2 solve the JCCD problem in three phases: (i) solving the centralized control problem under perfect communications via reinforcement learning, i.e., Q-learning, to find π∗(·)7; (ii) solving the task-oriented data quantization problem to find πc via a form of data clustering; (iii) finding the πm corresponding to πc.
In order to explain ABSA-1, we introduce the problem of task-oriented data compression with centralized control. The TODC problem is derived using techniques similar to those in [18], but for a different setting, i.e., the communication network of the MAS has a star topology. The TODC problem is no longer a joint control and communication problem but a quantization design problem in which the features of the control problem are taken into account. To arrive at the TODC problem from the JCCD problem, we use the functional Π^{m∗} to replace πm(·) with Π^{m∗}(πc). Upon the availability of Π^{m∗}, by plugging it into the JCCD problem (2), we obtain a new problem

argmin_{πc} E_{p(s(t))} |V^{π∗}(s(t)) − V^{Π^{m∗}(πc)}(c(t))|,  s.t. |C| ≤ 2^R,  (4)

where we maximize the system's return with respect to only the communication policies πc(·) of the local relay. The optimal control policy πm∗(·) of the CC is automatically computed by the mapping Π^{m∗}(πc(·)). We refer to this problem as the TODC problem. Upon the availability of Π^{m∗}, the JCCD problem (2) reduces to (4). Definition 1 formalizes a precise approach to solving (4) by obtaining the communication policy of the relay πc(·) as well as the corresponding Π^{m∗}, in order to solve (2).
Definition 1. Quantization and control policies in ABSA-1: The communication policy π^{c,ABSA−1}(·) designed by ABSA-1 is obtained by solving the following k-median clustering problem:

min_P Σ_{i=1}^{|C|} Σ_{s(t)∈P_i} |π∗(s(t)) − µ_i|,  (5)

where P = {P1, . . . , PB} is a partition of S and µ_i is the centroid of each cluster i. The communication policy of ABSA-1, π^{c,ABSA−1}(·), is an arbitrary non-injective mapping such that ∀k ∈ {1, . . . , B}: π^{c,ABSA−1}(s) = c^{(k)} if and only if s ∈ P_k. Now let C_g be a function composition operator such that C_g f = g ◦ f. We define the operator Π^{m∗} ≜ C_g, with g = π∗(π^{c,ABSA−1,−1}(·))8.
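The construction in Definition 1 can be sketched directly: states are grouped by their optimal joint action (the exact k-median solution when each distinct action value gets its own centroid), one code-word is assigned per cluster, and g replays π∗ on any cluster representative. The policy table below is a made-up example:

```python
# Toy optimal policy pi*: state -> joint action (illustrative assumption)
pi_star = {"s0": "m1", "s1": "m0", "s2": "m1", "s3": "m0"}

# Cluster states by their optimal action
clusters = {}
for s, m in pi_star.items():
    clusters.setdefault(m, []).append(s)

# Non-injective communication policy: one code-word per cluster
pi_c = {s: f"c{k}"
        for k, (m, group) in enumerate(sorted(clusters.items()))
        for s in group}

# Control map g = pi*(pi_c^{-1}(.)): any representative of a cluster works,
# since all its members share the same optimal action
g = {pi_c[s]: pi_star[s] for s in pi_star}

assert all(g[pi_c[s]] == pi_star[s] for s in pi_star)   # lossless replay
print(len(set(pi_c.values())))   # number of code-words actually used
```

The final assertion is the whole point of the definition: composing g with πc reproduces π∗ exactly, even though πc is non-injective.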
The optimality of the proposed ABSA-1 algorithm is subsequently provided in Theorem 2.

Lemma 2. The communication policy π^{c,ABSA−1}, as described by Definition 1, carries out lossless compression of observation data w.r.t. the average return if |C| ≥ |M|^N.

Proof. Appendix B. ■

Remark: ABSA-1 will also carry out lossless compression of observation data with respect to the distortion measure introduced in problem (3). Given the proofs of Lemma 2 and Lemma 1, the proof of this remark is straightforward and is therefore omitted.

7 ABSA's bottleneck arises from the increasing complexity of Q-learning as the number of agents N grows. Similar limitations apply to any other algorithm that requires a centralized training phase [7], [30].

8 Note that since π^{c,ABSA−1}(·) is non-injective, its inverse does not produce a unique output for a given input. Thus, by π∗(π^{c,ABSA−1,−1}(c′)) we mean π∗(s′), where s′ can be any arbitrary output of π^{c,ABSA−1,−1}(c′).
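A quick sanity check of the codebook-size condition in Lemma 2: since π∗ takes at most |M|^N distinct joint-action values, |C| ≥ |M|^N code-words always suffice for the clustering of Definition 1 (the policy and sizes below are illustrative):

```python
M_size, N = 2, 2                   # per-agent actions and number of agents
# Hypothetical optimal policy: state index -> joint action tuple
pi_star = {0: (0, 1), 1: (0, 1), 2: (1, 1), 3: (0, 0)}

codewords_needed = len(set(pi_star.values()))   # one per distinct action
assert codewords_needed <= M_size ** N          # |M|^N always suffices
print(codewords_needed)
```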
The losslessness of quantization in ABSA-1 implies that π^{ABSA−1} results in no loss of the system's average return compared with the case where the optimal policy π∗(·) is used to control the MAS under perfect communications. Consequently, the control policy π^{m,ABSA−1}(·) is optimal. Let us recall once again that we do not use a conventional quantization distortion metric here; we select a representation of the local observations in such a way that the conveyed message maximizes the average task return. Note that in [7], the authors do not find the higher-order function Π^{m∗} that reduces the joint communication and control problem to a task-oriented communication design; instead they solve an approximated version of the task-oriented communication design problem. In this paper, however, we introduce a closed-form Π^{m∗} in ABSA-1 that maps every communication policy π^{c,ABSA−1} introduced by ABSA-1 to the exact optimal control policy. This implies that the solutions provided by ABSA-1 are also the optimal solutions of the joint communication and control design (JCCD) problem.
B. ABSA-2 Algorithm

We saw in Lemma 2 that the communication policy obtained by solving problem (5) is optimal and results in lossless average-return performance when |C| ≥ |M|^N.
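As a quick numerical check of this condition, the lossless codebook size and the corresponding bit budget can be sketched as follows (the function name is ours; |M| = 5 and N = 3 match the three-agent experiments reported later):

```python
from math import log2

def lossless_codebook_size(n_actions: int, n_agents: int) -> int:
    """Smallest codebook satisfying the lossless condition |C| >= |M|^N."""
    return n_actions ** n_agents

# Three agents, |M| = 5 moves each (the three-agent setting of Fig. 5):
C = lossless_codebook_size(n_actions=5, n_agents=3)
R = log2(C)  # corresponding per-message bit budget R = log2 |C|
print(C)  # 125
```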
To solve problem (5), however, we need to know π∗(s(t)). This is a limiting assumption that, in ABSA-1, translates to two system models less general than the system pictured in Fig. 3: (i) an extra relay is present between the agents and the central controller, where the relay has perfect downlink channels to the agents and a single bit-budgeted channel to the CC; or (ii) the MAS is composed of a single agent and a CC, where the uplink of the agent is bit-budgeted but its downlink is a perfect channel.
Our second proposed algorithm, ABSA-2, removes the need to know π∗(s(t)) and can run under the more general settings shown in Fig. 3. This is done by approximating the local element m∗_i(t) of π∗(s(t)) = ⟨m∗_1(t), ..., m∗_N(t)⟩ at agent i given that agent's local observation oi(t). That is, given a centralized training phase, we have access to the empirical joint distribution p(oi, m∗_i), from which we can obtain a numerical MAP estimator of m∗_i.
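A minimal sketch of this estimator, assuming the centralized training trace is available as a list of (oi, m∗_i) pairs (the helper name `fit_map_estimator` is ours):

```python
from collections import Counter, defaultdict

def fit_map_estimator(samples):
    """Empirical MAP estimate of the optimal local action m*_i:
    for each observation o_i, pick the action most frequently
    paired with it in the centralized training trace."""
    counts = defaultdict(Counter)
    for o_i, m_star in samples:
        counts[o_i][m_star] += 1
    return {o: c.most_common(1)[0][0] for o, c in counts.items()}

# Toy trace: observation 7 is mostly paired with "up", observation 3 with "left".
trace = [(7, "up"), (7, "up"), (7, "down"), (3, "left")]
print(fit_map_estimator(trace))  # {7: 'up', 3: 'left'}
```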
Thus, ABSA-2 allows for fully distributed communication policies. In particular, the encoding of each agent's communication messages is carried out separately by that agent before it communicates with the CC or any other agent; this form of encoding is often referred to as distributed encoding. Furthermore, the encoding carried out by ABSA-2 at each agent is a low-complexity and low-power process that requires no inter-agent communication beforehand. Each agent directly communicates its encoded observations to the CC via a bit-budgeted communication channel. To improve the learning efficiency at the CC, it can take into account all the communications received in the time frame [t − d, t] when making a control decision m(t).
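One simple realization of this windowing, sketched under the assumption that the CC concatenates the last d + 1 joint messages into its controller input (the class name is ours):

```python
from collections import deque

class MessageWindow:
    """Keeps the joint messages received over [t - d, t] and flattens
    them into one input vector for the CC's controller."""
    def __init__(self, d, n_agents):
        self.n_agents = n_agents
        self.buf = deque(maxlen=d + 1)

    def push(self, joint_message):
        # One codeword per agent at each time step.
        assert len(joint_message) == self.n_agents
        self.buf.append(tuple(joint_message))

    def state(self):
        # Flat tuple of all messages received over the window.
        return tuple(c for msg in self.buf for c in msg)

w = MessageWindow(d=2, n_agents=2)
for msg in [(0, 1), (2, 0), (1, 1)]:
    w.push(msg)
print(w.state())  # (0, 1, 2, 0, 1, 1)
```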
Figure 4. Abstract representation of states in ABSA-2 with |C| = 3 and |M| = 5; |M| is represented by the number of shapes used for the observation points and |C| by the number of clusters shown in the right subplot. The left subplot shows the observation points prior to aggregation: during a centralized training phase we first compute π∗(·), from which π∗_i(·) : Ω → M is obtained, and we use the surjection π∗_i(·) to map a high-dimensional/precision observation space to a low-dimensional/precision one. The middle subplot shows the observation points together with the action values assigned to them, each unique shape representing a unique action value; this new representation of the observation points embeds the features of the control problem into the data quantization problem. Finally, we cluster the observation points according to their action values: all observation points assigned to (a set of) action values are clustered together. The right subplot shows the aggregated observation space, where all observation points in each cluster are represented by the same communication message. The centralized controller, which runs DQN, observes the environment at each time step through the aggregated observations/communications it receives from all agents.

Therefore, the ABSA-2 algorithm can strike a trade-off between the complexity of the computations carried out at the CC, which is directly impacted by the value of d, and the effectiveness of the agents' communications, which is inversely impacted by the value of |C|.
Moreover, ABSA-2 is straightforwardly extendable to a different value of |C| per agent i, instead of a single fixed bit budget R = log2 |C| for all agents.
As illustrated in Fig. 4, in ABSA-2 each agent i obtains a communication policy function πc_i(·) by solving a clustering problem over its local observation space, rather than over the global state space, formulated as follows:

    min_{Pi} Σ_{j=1}^{|C|} Σ_{oi(t) ∈ P_{i,j}} ‖π̃∗_i(oi(t)) − µ_{i,j}‖,    (6)

where Pi = {P_{i,1}, ..., P_{i,|C|}} is a partition of Ω, and

    π̃∗_i(oi(t)) = argmax_{m∗_i} p_{π∗}(m∗_i | oi(t)),    (7)

where m∗_i is the optimal action of agent i, i.e., the i-th element of m∗ ≜ π∗(o1(t), ..., oN(t)). Thus π̃∗_i(oi(t)) is the maximum a posteriori estimator of m∗_i, the i-th element of π∗(s(t)), given the local observation oi(t).
Once the clustering in (6) is done, each agent i trains its local communication policy πc,ABSA-2_i(·), which is any non-injective mapping such that, for all k ∈ {1, ..., |C|}: πc,ABSA-2_i(oi) = c(k) iff oi ∈ P_{i,k}.
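A simplified sketch of this action-based aggregation, assuming one codeword per distinct action value so that the clustering reduces to grouping (the paper applies k-median clustering when |C| is smaller than the number of action values; all names below are ours):

```python
def cluster_by_action(map_action, observations, codewords):
    """Action-based state aggregation: observations whose MAP-estimated
    optimal actions coincide end up in the same cluster and share one
    communication codeword."""
    partition = {}
    for o in observations:
        partition.setdefault(map_action[o], []).append(o)
    # Communication policy: observation -> codeword of its cluster.
    policy = {}
    for k, (_, cluster) in enumerate(sorted(partition.items())):
        for o in cluster:
            policy[o] = codewords[k]
    return policy

map_action = {0: "up", 1: "up", 2: "left", 3: "pause"}
pi_c = cluster_by_action(map_action, [0, 1, 2, 3], ["c1", "c2", "c3"])
print(pi_c[0] == pi_c[1])  # True: same action value, same codeword
```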
After obtaining the communication policies ⟨πc,ABSA-2_i(·)⟩ for i = 1, ..., N, we perform single-agent reinforcement learning at the CC to obtain a control policy πm(·) corresponding to these communication policies. To this end, and to manage the complexity of the algorithm for larger values of d, we propose to use the DQN architecture [41] at the CC.
IV. PERFORMANCE EVALUATION

In this section, we evaluate our proposed schemes via numerical results for the popular multi-agent geometric consensus problem9.

Algorithm 1. Action Based State Aggregation (ABSA-2)
1: Initialize replay memory D to capacity 10,000.
2: Initialize the state-action value function Q(·) with random weights θ.
3: Initialize the target state-action value function Qt(·) with weights θt = θ.
4: Obtain π∗(·) and Q∗(·) by solving (2) using Q-learning [40], where R ≫ H(oi(t)) ∀i ∈ N.
5: Compute π∗_i(oi(t)) = Mode(m∗_i | oi(t)), for all oi(t) ∈ Ω and all i ∈ N.
6: Solve problem (5) by applying k-median clustering to obtain Pi and πc_i(·), for i ∈ N.
7: for each episode k = 1 : 200,000 do
8:   Randomly initialize the observation oi(t = 0), for i ∈ N
9:   Randomly initialize the message c(t = 0)
10:  for t = 1 : T′ do
11:    Select ci(t), at agent i, following πc_i(·), for i ∈ N
12:    Obtain the message ⟨c1(t), ..., cN(t)⟩ at the CC
13:    Follow ϵ-greedy, at the CC, to generate the action mi(t), for i ∈ N
14:    Obtain the reward r(t) = R(s(t), m(t)) at the CC
15:    Store the transition (c(t), m(t), r(t), c(t + 1)) in D
16:    t ← t + 1
17:  end
18:  Sample a mini-batch D′ = {(c(t′), m(t′), r(t′), c(t′ + 1))}, t′ = t′_1 : t′_62, from D
19:  for each transition t′ = t′_1 : t′_62 of the mini-batch D′ do
20:    Compute the DQN loss Lt′(θ) = ½ ( r(t′) + max_{m∗} Qt(c(t′ + 1), m∗, θt) − max_{m∗} Q(c(t′), m∗, θ) )²
21:    Perform a gradient descent step on Lt′(θ) w.r.t. θ
22:  end
23:  Update the target network Qt(·) every 1000 steps
24: end
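The per-transition loss of Algorithm 1, line 20, can be sketched as follows, with tabular dictionaries standing in for the networks Q(·; θ) and Qt(·; θt); note that we follow the printed formula, in which both terms take a max over actions (a discount factor would conventionally weight the target's max term):

```python
def dqn_loss(transition, q, q_target, actions):
    """Per-transition loss of Algorithm 1, line 20, with tabular
    dictionaries q, q_target standing in for Q(.; theta), Qt(.; theta_t).
    Follows the printed formula: both terms take a max over actions."""
    c, m, r, c_next = transition
    td_target = r + max(q_target[(c_next, a)] for a in actions)
    current = max(q[(c, a)] for a in actions)
    return 0.5 * (td_target - current) ** 2

actions = ["up", "down"]
q = {(("c1",), a): 0.0 for a in actions}   # all Q-values start at zero
qt = dict(q)
print(dqn_loss((("c1",), "up", 1.0, ("c1",)), q, qt, actions))  # 0.5
```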
Through their indirect design, ABSA-1 and ABSA-2 never rely on explicit domain knowledge about any specific task, such as geometric consensus. Thus, we conjecture that their indirect design allows them to be applied beyond geometric consensus problems, to a much wider range of tasks. To make the geometric consensus task suitable for the evaluation of our proposed algorithms, similar to [18], we introduce a bit constraint on the communication channel between the agents and the CC. After evaluating the proposed algorithms in the context of the rendezvous problem, we attempt to explain the behaviour of all the algorithms via an existing metric, positive listening, which measures the task-effectiveness of communications. As positive listening falls short of explaining all aspects of the behaviour of the investigated algorithms, we also introduce a new metric, called task relative information, which helps explain the behaviour of the different algorithms with higher accuracy and reliability.
A. The geometric consensus problem

Our proposed schemes are evaluated in this section through numerical results for the rendezvous problem [42], [43], a specific type of geometric consensus problem under finite observability [28]. Following the instantaneous and synchronous communication model and the star network topology explained in Section II-A and Fig. 2, respectively, the rendezvous problem proceeds as follows. At each time step t, several events happen in the following order. First, each agent i obtains a local observation oi(t), which is equivalent to its own location in the grid world. Agent i subsequently follows its quantization/communication policy to generate a compressed version ci(t) of its observation, to be communicated to the CC via a bit-budgeted communication link. After receiving the quantized observations of all agents, the CC follows its control policy to select the joint action vector m(t) and communicates each agent i's local action mi(t) to it accordingly. The local action mi(t) ∈ M, communicated back to agent i via a perfect communication channel, is a one-directional move in the grid world, i.e., M = {left, right, up, down, pause}. Given each agent i's action mi(t), the environment evolves and transitions to the next time step t + 1, where each agent i obtains a new local observation oi(t + 1).
All agents receive a single team reward

    rt = C1, if ∃ i, j ∈ N : oi(t) ∈ ΩT and oj(t) ∉ ΩT,
         C2, if ∄ i ∈ N : oi(t) ∈ Ω − ΩT,
         0, otherwise,    (8)

where C1 < C2 and ΩT is the set of terminal observations, i.e., the episode terminates if ∃ i ∈ N : oi(t) ∈ ΩT. Accordingly, when not all agents arrive at the target point, the smaller reward C1 = 1 is obtained, while the larger reward C2 = 10 is attained when all agents visit the goal point at the same time.
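A direct sketch of the reward in (8), assuming observations are grid indices and using the reported values C1 = 1 and C2 = 10:

```python
def team_reward(observations, goal, c1=1, c2=10):
    """Team reward of Eq. (8): c2 when every agent is at the goal,
    c1 when some but not all agents are, and 0 otherwise."""
    at_goal = [o in goal for o in observations]
    if all(at_goal):
        return c2  # no agent remains outside the terminal set
    if any(at_goal):
        return c1  # some agents arrived, others did not
    return 0

goal = {22}  # goal point of the rendezvous (grid number 22)
print(team_reward([22, 22, 22], goal))  # 10
print(team_reward([22, 13, 22], goal))  # 1
print(team_reward([5, 13, 40], goal))   # 0
```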
9In our numerical experiments, the discount factor is assumed to be γ = 0.9.
All experiments are done over a grid world of size 8 × 8, where the goal point of the rendezvous is located at grid number ΩT = {22}. We compare our proposed ABSA algorithms with the heuristic non-communicative (HNC), heuristic optimal communication (HOC), and SAIC algorithms proposed in [18], which are direct schemes that jointly design the communication and control policies for the specific geometric consensus problem solved here. In contrast to ABSA-1 and ABSA-2, which enjoy an indirect design, the direct design of HOC and HNC does not allow them to be applied to any problem other than the specific geometric consensus problem with finite observability, i.e., the rendezvous problem explained here.
B. Numerical experiment

A constant learning rate α = 0.07 is applied when exact Q-learning is used to obtain π∗(·), and α = 0.0007 when DQN is used to learn πm(·) for ABSA-2. For the exact Q-learning, a UCB10 exploration rate of c = 1.25 is considered. The deep neural network that approximates the Q-values is a fully connected feed-forward network with 10 layers of depth, optimized using the Adam optimizer. An experience replay buffer of size 10,000 is used with a mini-batch size of 62. The target Q-network is updated every 1000 steps, and for exploration, decaying ϵ-greedy with initial ϵ = 0.05 and final ϵ = 0.005 is used [41].
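The exploration schedule can be sketched as follows; the endpoints ϵ = 0.05 and ϵ = 0.005 are as reported, while the linear decay shape is our assumption:

```python
def epsilon_schedule(step, total_steps, eps_start=0.05, eps_end=0.005):
    """Decaying epsilon-greedy exploration rate: endpoints as reported,
    linear decay shape assumed."""
    frac = min(step / total_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

print(epsilon_schedule(0, 200_000))        # 0.05 at the first episode
print(epsilon_schedule(200_000, 200_000))  # ≈ 0.005 at the last episode
```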
In any figure where the performance of a scheme is reported in terms of the average discounted cumulative reward, the rewards attained throughout the training iterations are smoothed using a moving-average filter with a memory of 20,000 iterations.
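This smoothing step can be sketched as a trailing moving-average filter (window length 20,000 as reported; a tiny window is used below for illustration):

```python
from collections import deque

def moving_average(rewards, window=20_000):
    """Trailing moving-average filter over a reward curve, as used
    to smooth the reported training curves."""
    buf, running, out = deque(maxlen=window), 0.0, []
    for r in rewards:
        if len(buf) == buf.maxlen:
            running -= buf[0]  # drop the oldest reward's contribution
        buf.append(r)
        running += r
        out.append(running / len(buf))
    return out

print(moving_average([1, 2, 3, 4], window=2))  # [1.0, 1.5, 2.5, 3.5]
```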
As explained in Section III-A, ABSA-1 and ABSA-2 both require a centralized training phase before they can be executed in a distributed fashion. For all black curves, one prior centralized training phase is required to obtain π∗(·). As detailed in Section III, the proposed algorithms, ABSA-1 and ABSA-2, leverage π∗(·) to design πc and then πm. The dashed curves, HOC and HNC, as proposed by [18], are heuristic schemes that exploit their designer's domain knowledge about the rendezvous task, which makes them inapplicable to any task other than the rendezvous problem. While HOC enjoys a joint control and communication design, HNC runs with no communication. Note that HNC and HOC require communication/coordination between the agents prior to the starting point of the task, which no other scheme requires. These schemes, introduced by [18], are detailed as follows.

HNC: A joint communication and control policy is designed using domain knowledge of the rendezvous problem. HNC agents approach the goal point and wait nearby for a sufficient number of time steps to ensure that the other agents have also arrived; only after that do they move onto the goal point. This scheme requires communication/coordination between the agents prior to the start of the task, since they must have agreed upon this coordination scheme in advance.

HOC: A joint communication and control policy is designed using domain knowledge of the rendezvous problem.
10UCB is a standard scheme used in exact reinforcement learning to strike a trade-off between exploration and exploitation [40].
Figure 5. Average return comparison between the proposed schemes and benchmarks introduced in [18]; the three-agent scenario under constant bit-budget values. (Axes: training iterations, ×10^4, versus average return.)
HOC agents wait next to the goal point until the other agents inform them that they have also arrived; only after that do they move onto the goal point. This scheme requires communication/coordination between the agents prior to the start of the task, since they must have agreed in advance both on this scheme of coordination and communication and on the meaning that each communication message entails.
To obtain the results shown in Fig. 5, we simulated the rendezvous problem for a three-agent system. The black curves illustrate the training phase occurring at the CC to obtain π^m after π^c has already been computed using equations (5) and (6). We observe the lossless performance of ABSA-1 in achieving the optimal average return without requiring any (second-round) training. To enable fully decentralized quantization of the observation process, ABSA-2 was proposed, which is seen to approach the optimal solution as d grows. All ABSA-2 curves are plotted with |C| = 3; the ABSA-1 curve is plotted with |C| = |M|^N = 125 in the three-agent scenario (Fig. 5) and |C| = |M|^N = 25 in the two-agent scenario (Fig. 6).
In Fig. 5, we see how the performance of ABSA-2 compares with HNC, HOC, and SAIC at different rates of quantization. As expected, with the increase in the size of the quantization codebook, the average-return performance of ABSA-2 gradually improves, approaching near-optimal performance at d = 3. We also observe the superior performance of ABSA-2 compared with SAIC at very tight bit budgets, where SAIC's performance drops drastically. As d grows, ABSA-2 approaches the optimal return performance even under higher rates of quantization; however, higher values of d come at the cost of increased computational complexity for ABSA-2.
C. Explainability of the learned communication policies

One common metric in the literature [37] for evaluating the effectiveness of communications is positive listening, I(c_i(t); m_j(t)), j ∈ N − {i}: the mutual information between the communication c_i(t) produced by an agent i and the action m_j(t) selected by another agent j following receipt of c_i(t).

Figure 6. The obtained normalized average return as a function of codebook size |C|, compared across the proposed schemes and benchmarks introduced in [18]: two-agent scenario.
Positive signaling, I(o_i(t); c_i(t)), is another metric proposed by [37], measuring the mutual information between agent i's observation o_i(t) and its own communication message c_i(t) at the same time step. As shown below, however, these metrics are unable to fully capture the underlying performance trends of all schemes. We therefore introduce, for the first time, a new metric called task-relevant information (TRI) that allows us to explain the task-effectiveness of the learned communication policies.
Measuring positive listening is one way to quantify the contribution of the communicated messages of agent i to the action selection of agent j. Positive signalling, on the other hand, measures the consistency and relevance of the communicated messages c_i(t) with respect to the agent's observations o_i(t). Since SAIC and ABSA use a deterministic mapping of the observation o_i to produce the communication message c_i, they are always guaranteed to exhibit positive signalling [37], the degree of which, however, is limited by the uplink channel's bit budget R = log2 |C|. Thus, among the existing metrics for measuring the effectiveness of communications, we limit our numerical studies to positive listening. Higher positive listening indicates stronger (though not necessarily better) coordination between the agents: a higher degree of dependence between the agents' actions and observations, which is not necessarily sufficient for the team to fulfill the task.
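To make the measurement concrete, positive listening can be estimated from logged trajectories with a plug-in mutual-information estimator over the discrete message and action alphabets. The sketch below assumes discrete symbols and paired samples; the helper name and toy data are illustrative, not from the paper.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Hypothetical logged symbols: agent i's messages c_i(t) and agent j's actions m_j(t).
c_i = [0, 0, 1, 1, 0, 1, 0, 1]
m_j = [0, 0, 1, 1, 0, 1, 0, 1]   # actions fully determined by the received message
print(mutual_information(c_i, m_j))  # 1.0 bit of positive listening
```

With independent streams the estimator returns 0 bits; with a deterministic message-to-action mapping, as here, it returns the full message entropy.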
Figure 7 shows how stronger coordination between the agents and the CC often results in increased performance of the MAS, i.e., a higher average return. For instance, the enhancement in the positive-listening performance of SAIC from the |C| = 3 to the |C| = 4 quantizer in Fig. 7 results in an improved average-return performance, as shown in Fig. 6. This metric also reasonably explains the improved return of ABSA-2 obtained by increasing d (the memory of the CC) and the size of the quantization codebook |C|.

Figure 7. Comparing the positive listening I(c_i(t); m_j(t)) performance across a range of schemes.
Moreover, stronger coordination between the agents and the CC is visible for ABSA-2 when compared with HOC. We would thus expect a better average-return performance for ABSA-2, which is in contrast to the results of Fig. 5. This suggests that stronger coordination, as measured by positive listening, does not necessarily yield an improved average return, since the coordination may not be perfectly aligned with the task's needs.
The curve for the HOC scheme shows that a positive listening of 0.3 bits is sufficient to maintain the coordination required for optimal performance in the aforementioned geometric consensus task. Therefore, in the ABSA-2 and SAIC schemes, there is still an unnecessary influence of the communication messages on the actions selected at the receiving end: not all of the received information contributes to the higher average return of the system. Accordingly, the communication messages designed by ABSA still carry some unnecessary data containing no task-specific, useful information. We thus believe that positive listening cannot explicitly quantify the effectiveness of task-oriented communication algorithms, and it therefore falls short in explaining their behaviour.
Even when positive listening is computed as I(c_i(t); m(t)), capturing the mutual information between the communication of agent i and the control signals of all agents, we arrive at much the same patterns (Fig. 8).
Figure 9 investigates the performance of multiple schemes via a novel performance metric: task-relevant information (TRI). We define the task-relevant information metric as

I( π^c(o_i(t)); π^*(s(t)) ) = I( c_i(t); m^*(t) ),   (9)

which measures the mutual information (in bits) between the communicated message of agent i and the vector m^*(t) of joint optimal actions at the CC, as selected by the optimal centralized control policy π^*(·).
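The distinction between positive listening and TRI can be illustrated numerically. In the hypothetical toy task below, the message carries the full two-bit state and the actual controller simply copies it, so positive listening is high; yet the optimal action depends only on the state's parity, so only one bit is task-relevant. The helper name and the parity task are illustrative assumptions, not the paper's setup.

```python
import math
from collections import Counter

def mi_bits(xs, ys):
    """Plug-in mutual information I(X; Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

states = [0, 1, 2, 3] * 25             # uniform toy state process
msgs = list(states)                    # c_i(t): the full 2-bit state
actions = list(msgs)                   # m_j(t): controller copies the message
opt_actions = [s % 2 for s in states]  # m*(t): only parity matters to the task

print(mi_bits(msgs, actions))      # positive listening: 2.0 bits
print(mi_bits(msgs, opt_actions))  # TRI per eq. (9): 1.0 bit is task-relevant
```

Half of the measured "coordination" here is spurious from the task's point of view, which is exactly the gap TRI is meant to expose.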
Figure 8. Comparing the positive listening I(c_i(t); m(t)) performance across a range of schemes.

As demonstrated by Fig. 9, TRI is an indirect metric of the effectiveness of communications that can explain the behaviour of different communication designs.
It is also observed that the TRI metric magnifies the performance gap between different schemes as they approach optimal performance. Moreover, TRI can be used as a standalone measure to quantify the effectiveness of a communication design, since it almost perfectly predicts the average-return performance of a communication policy without the communication needing to be tested on the real task.
Note that we measure the task-effectiveness of a quantization algorithm by the average return that can be obtained when using it. Further, to measure the average return obtainable under the communication policies ⟨π^c_1(·), ..., π^c_N(·)⟩, we have to design the control policy π^m(·) at the CC, which selects the control vector m(t) with access only to the quantized observations c(t) of the agents. Accordingly, we cannot measure the effectiveness of the communication policy of an MAS without a specific design for its control policy. Even after the control policy of the MAS is designed, it is challenging to tell whether the suboptimal performance of the algorithm is caused by an ineffective design of the control policy or of the communication policy; in fact, it is hard to disentangle the effects of the control and communication policies on the MAS's average return. Our proposed TRI metric makes it possible to measure the performance of any communication policy in isolation, without the effect of the control policy entering the numerical values of TRI. Accordingly, the importance of this metric is multi-fold: (i) using TRI as an indirect metric, we can measure the effectiveness of a communication policy for any specific task; (ii) it allows us to measure the effectiveness of the communication scheme prior to the design of any control policy; and (iii) it helps to design task-effective communication policies in complete separation from the control-policy design.
V. CONCLUSION

In this paper, we have investigated the joint design of control and communications in an MAS under a centralized control policy and distributed communication policies.
Figure 9. Comparing the task-relevant information (TRI) performance across a range of schemes (ABSA-2 with d = 1, 2, 3; SAIC with d = 1; HOC with d = 1). TRI comprehensively explains the behaviour of all task-effective quantization schemes in a given task without the need to measure their effectiveness via the resulting average return; compare this figure with Fig. 6.

We first proposed an action-based state aggregation algorithm (ABSA-1) for lossless compression and provided an analytical proof of its optimality.
Then we proposed ABSA-2, which offers a fully distributed communication policy and can trade computational complexity for communication efficiency. We finally demonstrated the task-effectiveness of the proposed algorithms via numerical experiments on a geometric consensus problem, using a number of representative metrics. Furthermore, our numerical studies demonstrate the pressing need for further research on metrics that can measure and explain the task-effectiveness of communications with more accuracy. Scalability in task-oriented design is yet another central challenge to be addressed in future research.
APPENDIX A
PROOF OF LEMMA 1

Proof. Applying Adam's law to equation (2) yields

\argmax_{\pi} \; \mathbb{E}_{p(c(t))}\Big[\mathbb{E}_{p_{\pi^c,\pi^m}(\{tr\}_{t'}^{T'} \mid c(t))}\big[g(t') \mid c(t)\big]\Big], \quad \text{s.t. } |C| \le 2^R, \tag{10}

where c(t) is generated by the communication policy π^c and the joint pmf of the system's trajectory {tr}_{t'}^{T'} is directly influenced by the action policy π^m. The conditional pmf p_{π^c,π^m}({tr}_{t'}^{T'} | c(t)) is the joint probability of the system's trajectory given the received communication c(t) when the policies π^c(·) and π^m(·) are followed. We proceed by negating problem (10) and adding to the objective a second term that is constant with respect to the problem's decision variables, obtaining

\argmin_{\pi^c} \; \mathbb{E}_{p(s(t))}\Big[\mathbb{E}_{p_{\pi^*}(\{tr\}_{t'}^{T'} \mid s(t))}\big[g(t') \mid s(t)\big]\Big] - \mathbb{E}_{p(c(t))}\Big[\mathbb{E}_{p_{\pi^c,\pi^m}(\{tr\}_{t'}^{T'} \mid c(t))}\big[g(t') \mid c(t)\big]\Big], \quad \text{s.t. } |C| \le 2^R. \tag{11}
We replace the conditional expectation of the system return by the value function V(·) [40, Ch. 3.5], giving

\argmin_{\pi^c} \; \mathbb{E}_{p(s(t))}\big[V^{\pi^*}(s(t))\big] - \mathbb{E}_{p(c(t))}\big[V^{\pi^m}(c(t))\big], \quad \text{s.t. } |C| \le 2^R. \tag{12}
+page_content=' (12) Note that the empirical joint distribution of c(t) can be obtained by following the communication policy πc on the empirical distribution of s(t).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' argmin πc Ep(s(t)) � V π∗� s(t) �� − Ep(s(t)) � V πm� c(t) �� , s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' |C| ≤ 2R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' (13) As V π∗� s(t) � − V πm� c(t) � ≥ 0 is true for any s(t) ∈ S, merging the two expectations results in argmin πc Ep(s(t)) ���V π∗� s(t) � − V πm� c(t) ����, s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' |C| ≤ 2R, (14) which concludes the proof of the lemma.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
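The step from (13) to (14) relies only on the pointwise inequality V^{π^*}(s) ≥ V^{π^m}(c(s)); a quick numerical check with hypothetical values illustrates why the two expectations can be merged into a single expected absolute difference:

```python
# Hypothetical optimal values V*(s) and quantized-policy values V^m(c(s)) <= V*(s),
# with an empirical state distribution p(s). These numbers are illustrative only.
v_star = {0: 5.0, 1: 3.0, 2: 4.0}
v_m    = {0: 4.5, 1: 3.0, 2: 2.0}
p      = {0: 0.5, 1: 0.25, 2: 0.25}

# E[V*] - E[V^m] ...
lhs = sum(p[s] * v_star[s] for s in p) - sum(p[s] * v_m[s] for s in p)
# ... equals E[|V* - V^m|] under the pointwise inequality.
rhs = sum(p[s] * abs(v_star[s] - v_m[s]) for s in p)
assert abs(lhs - rhs) < 1e-12
print(lhs)  # 0.75
```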
APPENDIX B
PROOF OF LEMMA 2

Proof. We depart from the result of Lemma 1, problem (3). Taking the expectation over the empirical distribution of s(t) and applying the Bellman optimality equation, we obtain

\argmin_{\pi} \; \frac{1}{n}\sum_{t=1}^{n}\Big|Q^{\pi^*}\big(s(t), \pi^*(s(t))\big) - Q^{\pi^m}\big(c(t), \pi^m(\pi^c(s(t)))\big)\Big|, \quad \text{s.t. } |C| \le 2^R, \tag{15}

where the vector π^c(s(t)) is N-dimensional and its i-th element is c_i(t). We proceed by plugging π^{c,ABSA-1}(·) and Π^{m*}, according to Definition 1, into equation (15) to obtain

\frac{1}{n}\sum_{t=1}^{n}\Big|Q^{\pi^*}\big(s(t), \pi^*(s(t))\big) - Q^{\pi^*}\big(c(t), \pi^*(s')\big)\Big|, \tag{16}

where s' = (π^{c,ABSA-1})^{-1}(π^{c,ABSA-1}(s(t))); any possible value of s' lies in the same subset P_{k'} as s(t) does, and by the definition of P_{k'} we have π^*(s(t)) = π^*(s') whenever |C| ≥ |M|^N. Thus, replacing π^*(s') with π^*(s(t)) in equation (16) gives

\frac{1}{n}\sum_{t=1}^{n}\Big|Q^{\pi^*}\big(s(t), \pi^*(s(t))\big) - Q^{\pi^*}\big(s(t), \pi^*(s(t))\big)\Big| = 0. \tag{17}

This concludes the proof of Lemma 2. ■
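The aggregation at the heart of this proof can be sketched in a few lines: states that share the same optimal joint action are mapped to one communication symbol, so the CC loses nothing by seeing only the symbol. The code below is a simplified single-encoder sketch; `absa1_codebook` and the parity toy policy are illustrative assumptions, not the paper's implementation.

```python
def absa1_codebook(states, pi_star):
    """Action-based state aggregation (sketch of Definition 1's idea):
    states sharing the same optimal action share one communication symbol."""
    code = {}     # optimal action -> symbol index
    encoder = {}  # state -> symbol
    for s in states:
        a = pi_star(s)
        if a not in code:
            code[a] = len(code)
        encoder[s] = code[a]
    return encoder, code

# Hypothetical toy task: the optimal action depends only on state parity.
enc, code = absa1_codebook(range(8), lambda s: s % 2)

# Zero value loss, as in Lemma 2: the optimal action is recoverable from the symbol.
decode_action = {v: k for k, v in code.items()}
assert all(decode_action[enc[s]] == s % 2 for s in range(8))
print(len(code))  # 2 symbols suffice instead of 8 states
```

This is why ABSA-1 is lossless: the quantizer only merges states that the optimal policy would treat identically anyway.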
REFERENCES

[1] L. S. Vailshery, "Number of internet of things (IoT) connected devices worldwide from 2019 to 2021, with forecasts from 2022 to 2030," Aug. 2022. [Online]. Available: https://www.statista.com/statistics/1183457/iot-connected-devices-worldwide/
[2] B. Güler, A. Yener, and A. Swami, "The semantic communication game," IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 4, pp. 787–802, 2018.
[3] H. Tong, Z. Yang, S. Wang, Y. Hu, W.
+page_content=' Saad, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Yin, “Federated learning based audio semantic communication over wireless networks,” in 2021 IEEE Global Communications Conference (GLOBECOM), 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 1–6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [4] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Pappas and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Kountouris, “Goal-oriented communication for real- time tracking in autonomous systems,” in 2021 IEEE International Conference on Autonomous Systems (ICAS), 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 1–5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [5] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Calvanese Strinati and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Barbarossa, “6g networks: Beyond shannon towards semantic and goal-oriented communications,” Computer Net- works, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 190, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 107930, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [6] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Mostaani, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Vu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Sharma, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Liao, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Chatzinotas, “Task-oriented communication system design in cyber-physical systems: A survey on theory and applications,” arXiv preprint arXiv:2102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='07166, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 12 [7] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Foerster, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Assael, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' de Freitas, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Whiteson, “Learning to communicate with deep multi-agent reinforcement learning,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Advances in Neural Information Processing Systems, Barcelona, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [8] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Shannon and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Weaver, “The mathematical theory of communi- cation [1949].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' urbana, il,” 1959.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [9] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Hu, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Wu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Xing, and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Wang, “Things2vec: Semantic modeling in the internet of things with graph representation learning,” IEEE Internet of Things Journal, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 7, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 1939–1948, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [10] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Cai, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Zhong, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Luo, “Seminer: Side-information-based seman- tics miner for proprietary industrial control protocols,” IEEE Internet of Things Journal, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 9, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 22, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 22 796–22 810, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [11] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Tung, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Kobus, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Roig, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' G¨und¨uz, “Effective communi- cations: A joint learning and communication framework for multi-agent reinforcement learning over noisy channels,” IEEE Journal on Selected Areas in Communications, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 39, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 8, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 2590–2603, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [12] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Mota, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Valcarce, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Gorce, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Hoydis, “The emergence of wireless mac protocols with multi-agent reinforcement learning,” arXiv preprint arXiv:2108.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='07144, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [13] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Shlezinger and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Eldar, “Deep task-based quantization,” Entropy, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 23, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 1, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 104, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [14] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Gutierrez-Estevez, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Wu, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Zhou, “Learning to commu- nicate with intent: An introduction,” arXiv preprint arXiv:2211.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='09613, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [15] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Zhang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Zou, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Lasaulce, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Saad, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Kountouris, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Bennis, “Goal-oriented communications for the iot and application to data compression,” arXiv preprint arXiv:2211.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='05378, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [16] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Shlezinger and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Eldar, “Task-based quantization with application to mimo receivers,” arXiv preprint arXiv:2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='04290, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [17] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Mostaani, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Simeone, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Chatzinotas, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Ottersten, “Learning- based physical layer communications for multiagent collaboration,” in 2019 IEEE Intl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Symp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' on Personal, Indoor and Mobile Radio Communications, Sep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [18] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Mostaani, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Vu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Chatzinotas, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Ottersten, “Task-oriented data compression for multi-agent communications over bit-budgeted channels,” IEEE Open Journal of the Communications Society, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 1867–1886, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [19] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Kountouris and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Pappas, “Semantics-empowered communication for networked intelligent systems,” IEEE Communications Magazine, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 59, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 6, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 96–102, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [20] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Carnap, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Bar-Hillel et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=', “An outline of a theory of semantic information,” 1952.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [21] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Zhang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Shao, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Tao, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Bi, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Letaief, “Deep learning- enabled semantic communication systems with task-unaware transmitter and dynamic data,” arXiv preprint arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='00271, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [22] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Stavrou and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Kountouris, “A rate distortion approach to goal- oriented communication,” in 2022 IEEE International Symposium on Information Theory (ISIT).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' IEEE, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 590–595.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [23] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Mostaani, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Vu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Chatzinotas, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Ottersten, “State ag- gregation for multiagent communication over rate-limited channels,” in GLOBECOM 2020-2020 IEEE Global Communications Conference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' IEEE, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 1–7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [24] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Kim, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Moon, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Hostallero, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Kang, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Lee, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Son, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Yi, “Learning to schedule communication in multi-agent reinforcement learning,” in Intl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' on Learning Representations, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [25] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Shao, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Zhang, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Poor, “An indirect rate-distortion characterization for semantic sources: General model and the case of gaussian observation,” arXiv preprint arXiv:2201.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='12477, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [26] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Chou, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Li, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Chien, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='-c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Lan, “A feasibility study on vehicle-to-infrastructure communication: Wifi vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' wimax,” in 2009 tenth international conference on mobile data management: systems, services and middleware.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' IEEE, 2009, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' 397–398.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' [27] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Liu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Tian, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
+page_content=' Ma, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'}
diff --git a/ndE3T4oBgHgl3EQf7Asa/content/tmp_files/2301.04794v1.pdf.txt b/ndE3T4oBgHgl3EQf7Asa/content/tmp_files/2301.04794v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f4aacffdcfea7012d2f6905f240964c36142b582
--- /dev/null
+++ b/ndE3T4oBgHgl3EQf7Asa/content/tmp_files/2301.04794v1.pdf.txt
@@ -0,0 +1,1134 @@
+Springer Nature 2021 LATEX template
+LiteLSTM Architecture Based on Weights
+Sharing for Recurrent Neural Networks
+Nelly Elsayed1*, Zag ElSayed1 and Anthony S. Maida2
+1*School of Information Technology, University of Cincinnati,
+2610 University Cir, Cincinnati, 45221, Ohio, United States.
+2School of Computing and Informatics, University of Louisiana at
+Lafayette, 301 E. Lewis Street, Lafayette, 70503, Louisiana,
+United States.
+*Corresponding author(s). E-mail(s): elsayeny@ucmail.uc.edu;
+Contributing authors: elsayezs@ucmail.uc.edu;
+maida@louisiana.edu;
+Abstract
+Long short-term memory (LSTM) is one of the most robust recurrent neural
+network architectures for learning sequential data. However, it requires
+considerable computational power to train and deploy, in both its software
+and hardware aspects. This paper proposes a novel LiteLSTM architecture
+that reduces the LSTM computation components via the weights-sharing
+concept, lowering the overall computation cost while maintaining the
+architecture's performance. The proposed LiteLSTM can be significant for
+processing large data where processing time is critical and hardware
+resources are limited, such as in IoT device security and medical data
+processing. The proposed model was evaluated empirically on three
+datasets from the computer vision, cybersecurity, and speech emotion
+recognition domains. The proposed LiteLSTM achieves accuracy comparable
+to other state-of-the-art recurrent architectures while using a smaller
+computation budget.
+Keywords: LiteLSTM, weights sharing, LSTM, recurrent neural networks,
+IoT, MNIST
+arXiv:2301.04794v1 [cs.LG] 12 Jan 2023
+
+1 Introduction
+Modeling sequential data such as text, univariate and multivariate time series,
+audio signals, biological signals, spatiotemporal sequences (videos), and amino
+acid and genetic sequences requires an apparatus that can recognize the temporal
+dependencies and relationships within the sequential data. In the early 1980s,
+the recurrent neural network (RNN) was designed as the first neural network
+approach that targeted sequential data problems [1–3]. The RNN architecture
+can capture temporal dependencies because it recursively integrates the
+current input with its own previous output [4]. Since it has an unrestricted
+but fading memory of the past, it can exploit temporal dependencies to
+influence the learning of structure within the data
+sequences [5]. The RNN has been applied in different research areas such as
+handwriting recognition [4, 6, 7], speech recognition [8–10], language model-
+ing [11–13], machine translation [14–16], action recognition [17–19], accident
+recognition [20–22], stock prediction [23–25], video classification [26, 27], intru-
+sion detection systems [28], time series prediction [29], and mental disorder
+prediction [30].
+However, the RNN has a significant weakness: its ability to learn long-
+term dependencies is limited due to the vanishing/exploding gradient problem.
+There have been several attempts to solve this major design problem and
+enhance the RNN's overall performance, since the RNN loses the ability to learn
+when the error gradient is corrupted. To solve the vanishing/exploding gradient
+problem, extensions to the RNN architecture add an internal state (memory)
+that enforces a constant error flow through the architecture. This
+constant error flow enhances the robustness of the error gradient over longer
+time scales. In addition, a gated control over the content of this internal state
+(memory) is also needed [31].
+Nevertheless, this early LSTM model had significant weaknesses. As
+originally designed by Hochreiter and Schmidhuber [31], the LSTM assumed
+that its input data had been segmented in advance into subsequences with
+explicitly marked ends, so that the memory could be reset between the
+processing of unrelated subsequences [31, 32]. Moreover, this LSTM
+architecture did not have an internal reset component for processing
+continual input streams. Therefore, when the LSTM processes a continuous
+input stream, its state activation may grow without bound and ultimately
+cause the architecture to fail [32].
+In 2000, [32] proposed a solution to this problem of the original LSTM
+of [31]. The authors added a forget gate, beside the input and output
+gates, to the LSTM architecture; it resets the LSTM memory when the
+input is markedly different from the memory content and helps remove the
+unnecessary information that the LSTM memory carries through time.
+This LSTM approach [32] is widely used to solve various problems such as
+speech recognition [8, 33–36], language modeling [13, 37–39], machine transla-
+tion [16, 40–42], time series classification [43, 44], image segmentation [45–47],
+and video prediction [40].
+
+However, this model also has significant weaknesses. First, the architecture
+does not have a direct connection from the memory state to the forget, input,
+and output gates. Hence, there is no control from the memory to the gates that
+could assist in preventing the gradient from vanishing or exploding. Second,
+the Constant Error Carousel (CEC) has no influence over the
+forget and input gates when the output gate is closed (i.e., the output gate
+produces a zero-valued output), which could negatively affect the model due to
+the lack of primary information flow within the model [48, 49].
+To handle these problems in the standard LSTM, in 2002, [48] added the
+peephole connections from the memory state cell to each of the LSTM forget,
+input, and output gates. The peephole connections allowed the memory state
+to exert some control over the gates, reinforcing the LSTM architecture and
+preventing the lack of information flow through the model in situations where
+the output gate is closed [48].
+The peephole added a generalization element to the standard LSTM [50].
+However, the major weakness of this architecture is its cost: the number of
+trainable parameters increases significantly, which in turn increases the
+memory, processing, and storage requirements for training the model and
+saving its trained weights, as well as the training time.
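The cost differences discussed here can be made concrete by counting trainable parameters. The sketch below is illustrative and not taken from the paper: it assumes input dimension n, hidden dimension h, and diagonal peephole connections (one weight vector per gate), following Gers et al. [48].

```python
def rnn_params(n, h):
    # Equation-(1)-style cell: W (h x n), U (h x h), bias b (h).
    return h * n + h * h + h

def lstm_params(n, h, peephole=False):
    # Standard LSTM: four weight sets (input gate i, input update g,
    # forget gate f, output gate o), each with W, U, and a bias.
    total = 4 * (h * n + h * h + h)
    # The peephole variant adds one weight vector per gate (i, f, o)
    # connecting the memory state to that gate.
    if peephole:
        total += 3 * h
    return total

def gru_params(n, h):
    # GRU: three weight sets (update gate, reset gate, candidate state).
    return 3 * (h * n + h * h + h)

if __name__ == "__main__":
    n, h = 128, 256
    print(rnn_params(n, h))         # 98560
    print(lstm_params(n, h))        # 394240
    print(lstm_params(n, h, True))  # 395008
    print(gru_params(n, h))         # 295680
```

For these toy dimensions the LSTM carries roughly four times the parameters of the plain RNN, and the peephole variant adds a further 3h on top; this is the overhead that weight sharing aims to reduce.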
+However, there is still growing interest in studying and applying the LSTM
+architecture to solve various sequential problems in different research domains
+due to the LSTM outperforming the GRU in several tasks when problems
+have large training datasets [51]. Moreover, research by Greff et al. [51]
+published in 2017 showed that the LSTM exceeds the GRU's performance in
+language modeling-related tasks. On the other hand, in some problems where
+the training datasets are small, the GRU outperforms the LSTM using a smaller
+computation budget [52].
+The era of big data requires robust tools for processing large amounts of
+data, and it requires those tools to be fast rather than time-consuming.
+Moreover, as the world tries to reduce its carbon (CO2) foot-
+print [53] by reducing the usage of high-performance hardware [54–57], the
+cost of the LSTM's implementation requirements is considered one of its
+significant drawbacks.
+Spatiotemporal prediction problems are challenging to solve using only
+a gated recurrent architecture. Implementing such models is quite expensive
+in terms of both resources and cost, as a large number of parameters,
+fast processors, large processing memory, and storage are needed.
+In addition, such models demand considerable time to train, validate, and test.
+Moreover, implementing such a model for real-time training is a challenge.
+This paper attempts to improve several computational aspects while
+maintaining a sophisticated performance level. It proposes a novel gated
+recurrent architecture using one gate: the Lite Long Short-Term Memory
+(LiteLSTM). The proposed LiteLSTM employs the concept of sharing weights
+among the gates, introduced in the GRU [52], to reduce the model's
+computation budget. It also employs memory control over the gate using a
+peephole connection over that one gate. Compared to the LSTM, peephole
+LSTM, and GRU, the LiteLSTM has a smaller computation budget and smaller
+implementation requirements while maintaining comparable accuracy. Due to
+its smaller computation budget, the LiteLSTM achieves a significant
+reduction in training time compared to the LSTM, which allows it to be
+implemented with a reduced CO2 footprint.
+Fig. 1 The RNN basic architecture and its corresponding unfolded-in-time representa-
+tion [61].
+This paper is organized as follows: Section 2 provides a brief overview of
+the RNN, standard LSTM, peephole LSTM, and GRU architectures. Section 3
+provides the LiteLSTM architecture design concept details, and Section 4 shows
+empirical results for the LiteLSTM implementation on three applications from
+three different research domains: computer vision (using MNIST [58]), cyber-
+security anomaly detection in IoT (the IEEE IoT Network Intrusion Dataset [59]),
+and speech emotion recognition (the TESS dataset [60]).
+2 Recurrent Neural Networks
+2.1 Basic RNN Architecture
+The recurrent neural network (RNN) basic architecture is shown in Figure 1.
+The left diagram shows the RNN architecture. The unfolded (unrolled) in time
+RNN representation is shown in the right diagram starting from the time step
+0 to time step t. The RNN is transformed into a feedforward network that
+can be trained by backpropagation. This algorithm is called backpropagation
+through time (BPTT) [62]. The RNN feeds its previous output vector h(t−1) at
+time step t − 1 and the current input vector x(t) to calculate the RNN output
+h(t) at the current time step t. This method allows the RNN to identify and
+utilize temporal information to influence learning in the data sequences.
+The basic RNN suffers from the vanishing/exploding gradient problem [63],
+limiting the model's ability to learn long-term dependencies within the sequen-
+tial data. This is because the RNN does not have any component in its
+architecture that could maintain a constant error flow through the recurrent
+model. The principle of adding gates as supporting components into the
+recurrent architecture was proposed to solve this problem.
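The vanishing-gradient behavior can be illustrated with a scalar toy example (illustrative, not from the paper): for h_t = tanh(w·x + u·h_{t−1} + b), the factor u·(1 − h_t²) multiplies the backpropagated gradient at every time step, so for moderate |u| the product shrinks toward zero over long time spans.

```python
import math

def gradient_through_time(u, steps, x=0.5, w=1.0, b=0.0):
    """Scalar RNN h_t = tanh(w*x + u*h_prev + b); returns |dh_T / dh_0|,
    the product of the per-step Jacobian factors u * (1 - h_t**2)."""
    h, grad = 0.0, 1.0
    for _ in range(steps):
        h = math.tanh(w * x + u * h + b)
        grad *= u * (1.0 - h * h)  # chain rule through one time step
    return abs(grad)

print(gradient_through_time(u=0.9, steps=5))   # already small
print(gradient_through_time(u=0.9, steps=50))  # effectively zero
```

The gated architectures below address exactly this decay by routing the error through an additively updated memory state instead of a repeated multiplicative squashing.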
+
+Fig. 2 The standard LSTM unrolled architecture.
+At a given discrete time step t, the RNN output is calculated as follows:
+h(t) = tanh(Wx(t) + Uh(t−1) + b)
+(1)
+where x(t) is the RNN input at time step t. The h(t) and h(t−1) are the RNN
+outputs at time steps t and t − 1. The feedforward and recurrent weights are
+represented by W and U, respectively. The weights are shared across time
+steps. b is the RNN model bias.
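Equation (1) can be sketched directly in pure Python (illustrative code, not from the paper; the toy weight values are arbitrary):

```python
import math

def matvec(M, v):
    # Plain matrix-vector product for small lists-of-lists.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def rnn_step(W, U, b, x_t, h_prev):
    """One step of Equation (1): h_t = tanh(W x_t + U h_prev + b)."""
    pre = [wx + uh + bias
           for wx, uh, bias in zip(matvec(W, x_t), matvec(U, h_prev), b)]
    return [math.tanh(z) for z in pre]

# Unrolling in time: the same weights W, U, b are shared across all steps.
W = [[0.5, -0.3], [0.1, 0.8]]  # feedforward weights (h x n)
U = [[0.2, 0.0], [0.0, 0.2]]   # recurrent weights (h x h)
b = [0.0, 0.0]
h = [0.0, 0.0]                 # initial hidden state
for x_t in [[1.0, 0.0], [0.0, 1.0]]:
    h = rnn_step(W, U, b, x_t, h)
```

Each pass through the loop corresponds to one column of the unrolled diagram in Figure 1, with the output of one step fed back in as h_prev for the next.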
+2.2 Standard Long Short-Term Memory (LSTM)
+Gers et al. [32] proposed the standard LSTM architecture in 2000 as an
+improved version of the first LSTM architecture, which was proposed in 1997
+by Hochreiter et al. [31]. This standard LSTM aimed to solve the continuous
+input stream problem, which allowed the memory state cell values to grow in
+an unbounded fashion, causing saturation of the output squashing (activation)
+function. Gers et al. [32] proposed to add an additional gate to the LSTM
+architecture: a forget gate f that resets the LSTM memory when the input is
+markedly different from the memory content and serves to remove the
+unnecessary information that the LSTM memory holds through time.
+Figure 2 shows the standard LSTM unfolded architecture where c(t), h(t)
+are the memory state cell and LSTM output at time t, respectively. The symbol
+⊙ denotes the element-wise (Hadamard) multiplication [32, 64] and σ denotes
+the logistic sigmoid function. bi, bg, bf, and bo are the biases of each gate. W’s
+are the feedforward weights and U’s are the recurrent weights.
+The value of each component in the standard LSTM is calculated as follows:
+i(t) = σ(Wxix(t) + Uhih(t−1) + bi)
+(2)
+g(t) = tanh(Wxgx(t) + Uhgh(t−1) + bg)
+(3)
+f (t) = σ(Wxfx(t) + Uhfh(t−1) + bf)
+(4)
+o(t) = σ(Wxox(t) + Uhoh(t−1) + bo)
+(5)
+
+Fig. 3 The standard LSTM unrolled architecture operation level that shows the compo-
+nents and their corresponding weights.
+c(t) = f (t) ⊙ c(t−1) + i(t) ⊙ g(t)
+(6)
+h(t) = tanh(c(t)) ⊙ o(t)
+(7)
+where i(t), f (t), and o(t) are the input, forget, and output gates, respectively.
+The gates are constrained to have activation values between zero and one to
+indicate their status: open, closed, partially open, or partially closed. g(t) is
+the input-update value. The model has two activation (squashing) units, input-
+update and output activation, where the hyperbolic tangent (tanh) is the
+preferred activation function [65]. The memory cell state at
+time t is c(t) and the output of the LSTM unit at time t is h(t).
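Equations (2)-(7) can be sketched in pure Python as follows (illustrative code, not from the paper; the one-dimensional toy weights are arbitrary):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def unit(Wx, Uh, b, x_t, h_prev, act):
    # Shared pattern of Equations (2)-(5): act(W x_t + U h_prev + b).
    pre = [a + c + d for a, c, d in zip(matvec(Wx, x_t), matvec(Uh, h_prev), b)]
    return [act(z) for z in pre]

def lstm_step(p, x_t, h_prev, c_prev):
    """One step of Equations (2)-(7); p maps names to weights/biases."""
    i = unit(p["Wxi"], p["Uhi"], p["bi"], x_t, h_prev, sigmoid)    # Eq (2)
    g = unit(p["Wxg"], p["Uhg"], p["bg"], x_t, h_prev, math.tanh)  # Eq (3)
    f = unit(p["Wxf"], p["Uhf"], p["bf"], x_t, h_prev, sigmoid)    # Eq (4)
    o = unit(p["Wxo"], p["Uho"], p["bo"], x_t, h_prev, sigmoid)    # Eq (5)
    c = [fk * ck + ik * gk for fk, ck, ik, gk in zip(f, c_prev, i, g)]  # Eq (6)
    h = [math.tanh(ck) * ok for ck, ok in zip(c, o)]                    # Eq (7)
    return h, c

# One-dimensional toy example: all weights 0.5, all biases 0.
params = {k: [[0.5]] for k in ("Wxi", "Uhi", "Wxg", "Uhg",
                               "Wxf", "Uhf", "Wxo", "Uho")}
params.update({k: [0.0] for k in ("bi", "bg", "bf", "bo")})
h, c = lstm_step(params, x_t=[1.0], h_prev=[0.0], c_prev=[0.0])
```

With c_prev = 0 the forget-gate term of Equation (6) vanishes, so the new state reduces to the closed form c = i · g = σ(0.5) · tanh(0.5).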
+Figure 3 shows the operation level of the standard LSTM where each com-
+ponent of the standard LSTM and its corresponding weights are given. The
+symbols × and ⊙ denote matrix multiplication and element-wise multiplica-
+tion, respectively.
+The standard LSTM architecture is widely used in various problem-solving
+tasks and applications across different research fields. However, its architecture has
+major drawbacks. First, there is no direct connection from the memory to the
+gates which leads to the absence of CEC control over the gates [48]. Second, if
+the output gate is closed, the CEC has no influence over the forget and input
+gates which could impair the model due to the lack of primary information
+flow within the model [48].
+2.3 The Peephole-Based LSTM
+In 2002, Gers et al. [48] proposed a solution to the major problems of the
+standard LSTM: a new component, named the peephole connection, which adds
+a data-flow connection from the memory state to each of the three LSTM
+gates. The peephole connections allow the memory state value to exert
+
+Fig. 4 The peephole-based LSTM unrolled architecture proposed by Gers et al. [48].
+control over the three LSTM gates. This helps prevent the vanishing and/or
+exploding gradient problems that the standard LSTM can face.
+Figure 5 shows the operation level of the peephole-based LSTM. The
+equations to calculate the peephole LSTM are as follows:
+i(t) = σ(Wxi x(t) + Uhi h(t−1) + Wci ⊙ c(t−1) + bi)    (8)
+g(t) = tanh(Wxg x(t) + Uhg h(t−1) + bg)                (9)
+f(t) = σ(Wxf x(t) + Uhf h(t−1) + Wcf ⊙ c(t−1) + bf)    (10)
+o(t) = σ(Wxo x(t) + Uho h(t−1) + Wco ⊙ c(t−1) + bo)    (11)
+c(t) = f(t) ⊙ c(t−1) + i(t) ⊙ g(t)                     (12)
+h(t) = tanh(c(t)) ⊙ o(t)                               (13)
+where the symbol ⊙ denotes the elementwise (Hadamard) multiplication. Wci,
+Wcf, and Wco are the peephole connection weights between the memory state
+c(t−1) and the input, forget, and output gates, respectively.
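A hedged sketch of Equations (8)–(13) follows; since the peephole terms enter elementwise (⊙), they are stored here as vectors in a dict P, whose keys and the toy dimensions are assumptions for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def peephole_lstm_step(x, h_prev, c_prev, W, U, P, b):
    """One peephole LSTM step, Eqs. (8)-(13). P holds the peephole
    weight vectors for gates 'i', 'f', 'o' (applied elementwise)."""
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + P['i'] * c_prev + b['i'])  # Eq. (8)
    g = np.tanh(W['g'] @ x + U['g'] @ h_prev + b['g'])                    # Eq. (9)
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + P['f'] * c_prev + b['f'])  # Eq. (10)
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + P['o'] * c_prev + b['o'])  # Eq. (11)
    c = f * c_prev + i * g                                                # Eq. (12)
    h = np.tanh(c) * o                                                    # Eq. (13)
    return h, c

# Toy sizes for the example: input size 3, hidden size 2.
rng = np.random.default_rng(1)
W = {k: rng.standard_normal((2, 3)) for k in 'igfo'}
U = {k: rng.standard_normal((2, 2)) for k in 'igfo'}
P = {k: rng.standard_normal(2) for k in 'ifo'}
b = {k: np.zeros(2) for k in 'igfo'}
h, c = peephole_lstm_step(rng.standard_normal(3), np.zeros(2), np.ones(2), W, U, P, b)
```

The only change relative to the standard step is the extra `P[...] * c_prev` term inside each gate, which is exactly what gives the memory state control over the gates.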
+Adding peephole connections to the standard LSTM makes the architecture
+more robust against the vanishing and/or exploding gradient problem. However,
+it causes a significant increase in the number of trainable parameters, the
+training time, and the memory requirements.
+2.4 Gated Recurrent Unit (GRU)
+The GRU model consists of two gates: the update gate z and the reset gate
+r, whereas the LSTM consists of three gates: input, output, and forget gates.
+In addition, the GRU does not contain the memory state cell that the LSTM
+model includes. Therefore, the GRU architecture is smaller than the LSTM
+by one gate and a memory state cell. The GRU integrates both the input gate
+and the forget gate of the LSTM model into one update gate z [51], reusing
+the same set of weights to reduce the size of the model architecture. The
+unfolded GRU block architecture is shown in Figure 6.
+
+Fig. 5 The operation level of the peephole-based LSTM unrolled architecture, showing
+its components and their corresponding weights.
+Fig. 6 The GRU unfolded architecture.
+The reset gate operates similarly to the output gate of the LSTM. The GRU
+eliminates the output squashing function, the memory unit, and the CEC,
+which yields a reduction in trainable parameters compared with the standard
+LSTM. However, the absence of the CEC may lead to exploding and/or
+vanishing gradients.
+At time step t, the GRU unit output h(t) is calculated as follows [52]:
+z(t) = σ(Wxz x(t) + Uhz h(t−1) + bz)                   (14)
+r(t) = σ(Wxr x(t) + Uhr h(t−1) + br)                   (15)
+h̃(t) = tanh(Wx x(t) + Uh (r(t) ⊙ h(t−1)) + b)          (16)
+h(t) = (1 − z(t)) ⊙ h(t−1) + z(t) ⊙ h̃(t)               (17)
+where Wxz, Wxr, and Wx are the feedforward weights of the update gate
+z(t), the reset gate r(t), and the output candidate activation h̃(t), respectively.
+
+Fig. 7 The operation level of the GRU architecture showing the weights of each component.
+Fig. 8 The LiteLSTM unrolled architecture. The single network gate (output indicated by
+σ) sends information flow to three locations that correspond to the outputs of the forget,
+input, and output gates of the standard LSTM.
+The recurrent weights are Uhz, Uhr, Uh for the update gate z(t), the reset gate
+r(t), and the output candidate activation h̃(t), respectively. The biases of the
+update gate, reset gate, and output candidate are denoted by bz, br, and
+b, respectively. σ is the logistic sigmoid function and tanh is the hyperbolic
+tangent function. The elementwise (Hadamard) multiplication is denoted by
+⊙. Figure 7 shows the operation level of the GRU architecture with weights
+and biases made explicit.
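Equations (14)–(17) can likewise be sketched as one GRU step. The dict-based weight layout and the toy dimensions are assumptions for illustration, not the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, W, U, b):
    """One GRU step, Eqs. (14)-(17). Keys: 'z' update gate,
    'r' reset gate, 'h' output candidate activation."""
    z = sigmoid(W['z'] @ x + U['z'] @ h_prev + b['z'])              # Eq. (14)
    r = sigmoid(W['r'] @ x + U['r'] @ h_prev + b['r'])              # Eq. (15)
    h_tilde = np.tanh(W['h'] @ x + U['h'] @ (r * h_prev) + b['h'])  # Eq. (16)
    return (1.0 - z) * h_prev + z * h_tilde                         # Eq. (17)

# Toy sizes for the example: input size 3, hidden size 2.
rng = np.random.default_rng(2)
W = {k: rng.standard_normal((2, 3)) for k in 'zrh'}
U = {k: rng.standard_normal((2, 2)) for k in 'zrh'}
b = {k: np.zeros(2) for k in 'zrh'}
h = gru_step(rng.standard_normal(3), np.zeros(2), W, U, b)
```

Note that, unlike the LSTM step, the function returns only h: there is no separate memory state cell, which is exactly the structural difference described above.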
+3 LiteLSTM Architecture
+The proposed LiteLSTM aims to reduce the overall implementation cost of
+the LSTM, solve its significant problems, and maintain accuracy comparable
+to that of the LSTM. The proposed LiteLSTM architecture appears in
+Figure 8.
+
+Fig. 9 The operation level of the LiteLSTM architecture showing the weights of each
+component.
+The architecture of the LiteLSTM consists of only one trainable gated unit.
+We name this trainable gate the forget gate or network gate. This single gate
+behaves as a shared set of weights among the three gates of the standard
+LSTM. The LiteLSTM has a peephole connection from the memory state to the
+forget gate, which preserves the memory state from the LSTM and keeps the
+CEC to avoid vanishing and/or exploding gradients.
+Thus, the proposed LiteLSTM preserves the critical components of the
+LSTM as stated by [51] while reducing much of the parameter redundancy
+in the LSTM architecture. The LiteLSTM has a significant reduction in the
+number of trainable parameters that are required to implement the model.
+Therefore, the LiteLSTM reduces the training time, memory, and hardware
+requirements compared to the standard LSTM, peephole-based LSTM,
+and GRU architectures. Furthermore, the proposed LiteLSTM architecture
+achieves prediction accuracy comparable to that of the LSTM. Figure 9
+shows a detailed architecture of the unrolled (unfolded) LiteLSTM assuming
+non-stacked input.
+The LiteLSTM block architecture contains only one trainable gate, which
+compensates for the elimination of the other two gates of the standard LSTM
+by sharing its trainable weights.
+standard LSTM to process long data sequences and maintains the CEC to
+manage the vanishing/exploding gradient problem.
+The LiteLSTM formulas are derived as follows. During the forward pass
+within the LiteLSTM at time step t, the total input inp(t) to the single
+forget gate f(t) is calculated by:
+inp(t) = [Wfx, Ufh, Wfc] [x(t), h(t−1), c(t−1)] + bf   (18)
+where inp(t) ∈ Rη×1 and η × 1 is the dimension of the input vector inp(t).
+x(t) is the input at time t, x(t) ∈ Rη×1; h(t−1) is the output of the LiteLSTM
+architecture at time t − 1; and c(t−1) denotes the memory state cell at time
+t − 1. Both h(t−1), c(t−1) ∈ Rη×1. Wfx, Ufh, and Wfc are the weight sets.
+All three weight
+
+Fig. 10 The logistic sigmoid function curve.
+Fig. 11 The hardSigmoid function curve.
+sets Wfx, Ufh, and Wfc and biases bf are trainable. The square brackets
+indicate stacking. We let Wf = [Wfx, Ufh, Wfc] and If = [x(t), h(t−1), c(t−1)].
+A squashing function G is applied to the net input:
+f(t) = G(inp(t)).                                      (19)
+Depending on the application, the squashing function G can be either the
+logistic sigmoid (σ) or the hard sigmoid (hardSig) [66]. The logistic sigmoid
+is calculated by:
+σ(x) = e^x / (e^x + 1) = 1 / (1 + e^(−x)),             (20)
+where x is a real number, x ∈ (−∞, ∞), and σ(x) has the range (0, 1). The
+hard sigmoid (hardSig) is calculated by:
+hardSig(x) = max(min(0.25x + 0.5, 1), 0)               (21)
+Figure 10 and Figure 11 show the logistic sigmoid (σ) and hard sigmoid
+(hardSig) function curves, respectively. The value of f(t) in Eq. (19) falls in
+the range (0, 1) or [0, 1], depending on whether the logistic sigmoid (σ) or
+the hard sigmoid function is used, respectively [65, 67].
+
+Table 1  Computational components comparison between the proposed LiteLSTM and
+the state-of-the-art recurrent architectures.
+
+Comparison                              RNN  GRU  LSTM  pLSTM  LiteLSTM
+Number of gates                          0    2    3     3      1
+Number of activations                    1    1    2     2      2
+State memory cell                        ×    ×    ✓     ✓      ✓
+Peephole connection                      ×    ×    ×     ✓      ✓
+Number of weight matrices                2    6    8     11     6
+Number of elementwise multiplications    2    3    3     6      3
+Number of bias vectors                   1    3    4     4      2
+Sharing weights concept                  ×    ✓    ×     ×      ✓
+Assuming the logistic sigmoid σ is selected, the gate value f(t) is calculated by:
+f(t) = σ(Wf If + bf).                                  (22)
+Selecting the logistic sigmoid or the hard sigmoid function is mainly
+application dependent. However, the hard sigmoid is the preferred function
+for the LiteLSTM network gate to prevent it from being closed (i.e., from
+producing a zero-valued output). The input update (memory activation)
+equation is calculated by:
+g(t) = tanh(Wg Ig + bg)                                (23)
+where Wg = [Wgx, Ugh] and Ig = [x(t), h(t−1)]. The dimensions of Wg match
+those of Wf, which maintains dimensional compatibility within the
+architecture design.
+Finally, the LiteLSTM output is calculated by:
+c(t) = f(t) ⊙ c(t−1) + f(t) ⊙ g(t)                     (24)
+h(t) = f(t) ⊙ tanh(c(t))                               (25)
+Table 1 shows a comparison between the architecture design and computa-
+tion components of the RNN, GRU, standard LSTM, peephole-based LSTM
+(pLSTM), and the proposed LiteLSTM.
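To make the parameter savings concrete, the gate parameters implied by the equations can be counted directly. This is an illustrative sketch with hypothetical helper names: it counts only the recurrent-layer weights and biases for input dimension m and hidden dimension n (the totals in Table 2 also include the surrounding layers, so they differ).

```python
def lstm_gate_params(m, n):
    """Gate parameters of a standard LSTM layer (Eqs. 2-5):
    four units, each with W (n x m), U (n x n), and a bias of length n."""
    return 4 * (n * m + n * n + n)

def lite_lstm_gate_params(m, n):
    """Gate parameters of a LiteLSTM layer (Eqs. 18 and 23):
    Wf = [Wfx, Ufh, Wfc] of size n x (m + 2n), Wg = [Wgx, Ugh] of size
    n x (m + n), plus two biases of length n."""
    return n * (m + 2 * n) + n + n * (m + n) + n

# With m == n == 64, the LiteLSTM keeps roughly 5/8 of the LSTM's
# gate parameters, i.e. a reduction of roughly one-third.
lstm_p = lstm_gate_params(64, 64)        # 33024
lite_p = lite_lstm_gate_params(64, 64)   # 20608
```

Under the assumption m == n, the ratio is (5n² + 2n) / (8n² + 4n) ≈ 5/8, consistent with the "approximately one-third" reduction claimed in the conclusion.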
+4 Empirical Evaluation and Analysis
+In this paper, the LiteLSTM has been empirically tested and evaluated in
+three research domains: computer vision, anomaly detection in IoT, and speech
+emotion recognition. The MNIST [58] has been used as the computer vision
+experiment dataset, and the IEEE IoT Network Intrusion Dataset [59] is used
+for anomaly detection in IoT tasks. We performed our experiments on a
+machine with an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz, Microsoft
+Windows 10, and 32 GB of memory, using Python 3.7.6, Keras 2.0.4, and
+TensorFlow 1.15.0.
+
+Fig. 12 The accuracy diagrams of the recurrent architectures and the LiteLSTM using
+the MNIST dataset.
+Table 2  Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent
+architectures using the MNIST dataset.
+
+Comparison     RNN      GRU      LSTM     pLSTM    LiteLSTM
+Time (m)       11.24    43.01    60.36    75.45    42.94
+Parameters     792,210  812,610  822,810  833,010  812,610
+Accuracy (%)   67.64    94.09    95.70    95.99    96.07
+The first empirical evaluation of the LiteLSTM was performed using the
+MNIST dataset, which consists of 70,000 images of handwritten digits between
+0 and 9. The dataset is split into 60,000 samples for training and 10,000
+samples for testing [68]. The MNIST images were centered in a 28×28
+image by computing the center of mass of the pixels. The model used a
+two-layer architecture of 64 units each, followed by a Softmax layer. For the
+training process, the batch size was set to 128 and the number of epochs to
+20. The Adam optimizer was used with learning rate 10−3, β1 = 0.9,
+β2 = 0.999, and ϵ = 10−7. Table 2 shows
+the accuracy results of the different recurrent architectures and the LiteLSTM,
+where the time is measured in minutes. The RNN shows a significantly shorter
+training time. However, it has the lowest performance compared to the other
+recurrent architectures. The LiteLSTM shows an improvement in accuracy
+compared to the other recurrent architectures. Figure 12 shows the accuracy
+plots for each of the LiteLSTM and the state-of-the-art recurrent models.
+The second empirical evaluation of the LiteLSTM was performed using the
+IEEE IoT Network Intrusion Dataset. The dataset consists of 42 raw network
+packet files (pcap) captured at different time points. Two IoT devices, namely
+the SKT NUGU (NU 100) and the EZVIZ Wi-Fi camera (C2C Mini O Plus
+1080P), were used to generate IoT traffic. The data contains normal traffic flow
+and different types of cyberattacks, namely: ARP spoofing attack, DoS (SYN
+flooding) attack, scan (host and port scan) attack, scan(port and OS scan)
+attack, (UDP/ACK/HTTP Flooding) of zombie PC compromised by Mirai
+malware, Mirai-ACK flooding attack, Mirai-HTTP flooding attack, and Telnet
+brute-force attack. In our experiments, we used the dataset twice: first, to
+detect whether an attack occurred (binary classification), and second, to
+detect the type of attack (multi-class classification). We set
+the batch size to 32 and the number of epochs to 20. Table 3 shows the binary
+experimental results for the LiteLSTM and the recurrent architectures. Table 4
+
+[Figure 12 panels: (a) RNN accuracy, (b) LSTM accuracy, (c) GRU accuracy,
+(d) LiteLSTM accuracy; each panel plots training and validation accuracy per epoch.]
+Table 3  Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent
+architectures using the IEEE IoT Network Intrusion binary dataset.
+
+Comparison     RNN     GRU     LSTM    pLSTM   LiteLSTM
+Time (m)       20.26   43.27   41.51   51.21   28.44
+Precision      0.8144  0.9328  0.9422  0.9653  0.9382
+Recall         0.9763  0.9757  0.9484  0.9545  0.9834
+F1-score       0.8880  0.9134  0.9597  0.9599  0.9603
+Accuracy (%)   98.70   99.51   99.50   99.56   99.60
+Table 4  Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent
+architectures using the IEEE IoT Network Intrusion multi-class cyberattacks dataset.
+
+Comparison     RNN     GRU     LSTM    pLSTM   LiteLSTM
+Time (m)       19.98   42.79   50.41   59.96   29.31
+Precision      0.8875  0.8991  0.9461  0.9249  0.8999
+Recall         0.8418  0.8300  0.7898  0.8086  0.8318
+F1-score       0.8640  0.8632  0.8609  0.8628  0.8645
+Accuracy (%)   83.35   86.70   86.90   87.03   87.10
+Fig. 13 The accuracy diagrams of the recurrent architectures and the LiteLSTM using
+the Toronto Emotional Speech Set (TESS) dataset.
+shows the detection results of the LiteLSTM and the recurrent architectures
+for detecting different types of cyberattacks.
+The third empirical evaluation of the LiteLSTM was performed on a voice
+(audio) emotion recognition task. For this purpose, we used the Toronto Emo-
+tional Speech Set (TESS) [60], which is one of the emotion recognition dataset
+benchmarks that has been used in several emotion recognition applications
+and tasks [69–71]. This dataset consists of 2800 stimuli and has seven different
+emotion categories: anger, disgust, fear, happiness, pleasant/surprise, sadness,
+and neutral. The major significance of this dataset is that the distribution
+between the number of stimuli per emotion category is equally likely [60]. Sim-
+ilar to the previous experiments, we tested the proposed LiteLSTM with the
+other recurrent neural network architectures. For this empirical evaluation, we
+used the model described [69], which used the GRU as the learning model. We
+replaced the GRU with LiteLSTM, peephole LSTM, and RNN and evaluated
+the model performance each time. The dataset has been split into training,
+testing, and validation sets with a ratio of 70%, 20%, and 10%, respectively.
+
+[Figure 13 panels: (a) RNN accuracy, (b) GRU accuracy, (c) LSTM accuracy,
+(d) pLSTM accuracy, (e) LiteLSTM accuracy; each panel plots training and
+validation accuracy per epoch.]
+Table 5  Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent
+architectures using the Toronto Emotional Speech Set (TESS).
+
+Comparison     RNN      GRU      LSTM     pLSTM    LiteLSTM
+Time (m)       79.56    171.16   201.64   239.84   117.24
+Precision      0.9312   0.9428   0.9686   0.9898   0.9799
+Recall         0.9546   0.9429   0.9026   0.9214   0.9446
+F1-score       0.9427   0.9428   0.9344   0.9543   0.9619
+Accuracy (%)   92.163   94.285   95.147   95.534   95.989
+Table 5 shows the empirical results of the proposed LiteLSTM and the
+recurrent architectures for emotion recognition from speech. Figure 13 shows
+the training versus validation accuracies for each of the recurrent architectures
+and the LiteLSTM using the Toronto Emotional Speech Set (TESS) dataset.
+5 Conclusion
+The novelty of the proposed LiteLSTM architecture lies in the following
+aspects. First, the LiteLSTM consists of one gate that serves as a
+multifunctional gate via the weights-sharing concept. Thus, the overall number
+of training parameters is reduced by approximately one-third compared to the
+LSTM or the peephole-LSTM. In addition, maintaining the peephole connection
+from the memory state cell to the remaining gate preserves the control of the
+memory over the gate, in contrast to the LSTM. Therefore, the LiteLSTM
+handles the vanishing/exploding gradient problem. The overall budget for
+implementing the LiteLSTM, including the training time, memory footprint,
+memory storage, and processing power, is smaller than that of the LSTM by
+approximately one-third. We empirically evaluated the LiteLSTM using three
+datasets: MNIST, the IEEE IoT Network Intrusion dataset, and the TESS
+speech emotion recognition dataset. The proposed LiteLSTM shows results
+comparable to the
+LSTM using a smaller computation budget. Due to the optimized LiteLSTM
+architecture design, we were able to complete the empirical tasks on a CPU
+alone, without involving a GPU in the computational process.
+Thus, the LiteLSTM architecture helps to reduce the CO2 footprint. The pro-
+posed LiteLSTM architecture is an attractive candidate for future hardware
+implementation on small and portable devices, especially IoT devices.
+Statements and Declarations
+• Funding: N/A
+• Conflict of interest/Competing interests: The authors declare that
+they have no conflict of interest.
+• The authors did not receive support from any organization for the submitted
+work.
+• All authors certify that they have no affiliations with or involvement in any
+organization or entity with any financial interest or non-financial interest in
+the subject matter or materials discussed in this manuscript.
+
+• The authors have no financial or proprietary interests in any material
+discussed in this article.
+References
+[1] Bourlard, H., Wellekens, C.J.: Speech dynamics and recurrent neural
+networks. In: International Conference on Acoustics, Speech, and Signal
+Processing, pp. 33–36 (1989). IEEE
+[2] Siegelmann, H.T.: Recurrent neural networks. Computer Science Today,
+29–45 (1995)
+[3] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning (2016). http://
+www.deeplearningbook.org
+[4] Graves, A., Liwicki, M., Fern´andez, S., Bertolami, R., Bunke, H., Schmid-
+huber, J.: A novel connectionist system for unconstrained handwriting
+recognition. IEEE Transactions on Pattern Analysis and Machine Intelli-
+gence 31(5), 855–868 (2009)
+[5] Elsayed, N.: Gated convolutional recurrent neural networks for predictive
+coding (2019)
+[6] Stuner, B., Chatelain, C., Paquet, T.: Handwriting recognition using
+cohort of lstm and lexicon verification with extremely large lexicon.
+Multimedia Tools and Applications 79(45), 34407–34427 (2020)
+[7] Carbune, V., Gonnet, P., Deselaers, T., Rowley, H.A., Daryin, A., Calvo,
+M., Wang, L.-L., Keysers, D., Feuz, S., Gervais, P.: Fast multi-language
+lstm-based online handwriting recognition. International Journal on Doc-
+ument Analysis and Recognition (IJDAR) 23(2), 89–102 (2020)
+[8] Sak, H., Senior, A., Beaufays, F.: Long short-term memory recurrent
+neural network architectures for large scale acoustic modeling. In: Fif-
+teenth Annual Conference of the International Speech Communication
+Association (2014)
+[9] Graves, A., Mohamed, A.-r., Hinton, G.E.: Speech recognition with
+deep recurrent neural networks. 2013 IEEE International Conference on
+Acoustics, Speech and Signal Processing, 6645–6649 (2013)
+[10] Zeyer, A., Doetsch, P., Voigtlaender, P., Schl¨uter, R., Ney, H.: A com-
+prehensive study of deep bidirectional lstm rnns for acoustic modeling in
+speech recognition. In: 2017 IEEE International Conference on Acoustics,
+Speech and Signal Processing (ICASSP), pp. 2462–2466 (2017). IEEE
+[11] Mikolov, T., Karafi´at, M., Burget, L., ˇCernock`y, J., Khudanpur, S.:
+
+Recurrent neural network based language model. In: Eleventh Annual
+Conference of the International Speech Communication Association
+(2010)
+[12] Mikolov, T., Kombrink, S., Burget, L., ˇCernock`y, J., Khudanpur, S.:
+Extensions of recurrent neural network language model. In: Acous-
+tics, Speech and Signal Processing (ICASSP), 2011 IEEE International
+Conference On, pp. 5528–5531 (2011). IEEE
+[13] Sundermeyer, M., Schl¨uter, R., Ney, H.: Lstm neural networks for lan-
+guage modeling. In: Thirteenth Annual Conference of the International
+Speech Communication Association (2012)
+[14] Ren, B.: The use of machine translation algorithm based on residual and
+lstm neural network in translation teaching. Plos one 15(11), 0240663
+(2020)
+[15] Bridle, J.S.: Alpha-nets: A recurrent ‘neural’ network architecture with a
+hidden Markov model interpretation. Speech Communication 9(1), 83–92
+(1990)
+[16] Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly
+learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)
+[17] Du, Y., Wang, W., Wang, L.: Hierarchical recurrent neural network for
+skeleton based action recognition. In: Proceedings of the IEEE Conference
+on Computer Vision and Pattern Recognition, pp. 1110–1118 (2015)
+[18] Ullah, A., Ahmad, J., Muhammad, K., Sajjad, M., Baik, S.W.: Action
+recognition in video sequences using deep bi-directional lstm with cnn
+features. IEEE access 6, 1155–1166 (2017)
+[19] Adewopo, V., Elsayed, N., Anderson, K.: Baby physical safety moni-
+toring in smart home using action recognition system. arXiv preprint
+arXiv:2210.12527 (2022)
+[20] Bortnikov, M., Khan, A., Khattak, A.M., Ahmad, M.: Accident recog-
+nition via 3d cnns for automated traffic monitoring in smart cities. In:
+Science and Information Conference, pp. 256–264 (2019). Springer
+[21] Adewopo, V., Elsayed, N., ElSayed, Z., Ozer, M., Abdelgawad, A., Bay-
+oumi, M.: Review on action recognition for accident detection in smart
+city transportation systems. arXiv preprint arXiv:2208.09588 (2022)
+[22] Fatima, M., Khan, M.U.K., Kyung, C.-M.: Global feature aggregation for
+accident anticipation. In: 2020 25th International Conference on Pattern
+Recognition (ICPR), pp. 2809–2816 (2021). IEEE
+
+[23] Kamijo, K.-i., Tanigawa, T.: Stock price pattern recognition-a recur-
+rent neural network approach. In: Neural Networks, 1990., 1990 IJCNN
+International Joint Conference On, pp. 215–221 (1990). IEEE
+[24] Elsayed, N., Zaghloul, Z.S., Azumah, S.W., Li, C.: Intrusion detection
+system in smart home network using bidirectional lstm and convolu-
+tional neural networks hybrid model. In: 2021 IEEE International Midwest
+Symposium on Circuits and Systems (MWSCAS), pp. 55–58 (2021).
+IEEE
+[25] Azumah, S.W., Elsayed, N., Adewopo, V., Zaghloul, Z.S., Li, C.: A deep
+lstm based approach for intrusion detection iot devices network in smart
+home. In: 2021 IEEE 7th World Forum on Internet of Things (WF-IoT),
+pp. 836–841 (2021). IEEE
+[26] Yang, Y., Krompass, D., Tresp, V.: Tensor-train recurrent neural networks
+for video classification. In: International Conference on Machine Learning,
+pp. 3891–3900 (2017). PMLR
+[27] Ogawa, T., Sasaka, Y., Maeda, K., Haseyama, M.: Favorite video
+classification based on multimodal bidirectional lstm. IEEE Access 6,
+61401–61409 (2018)
+[28] Debar, H., Dorizzi, B.: An application of a recurrent network to an intru-
+sion detection system. In: [Proceedings 1992] IJCNN International Joint
+Conference on Neural Networks, vol. 2, pp. 478–483 (1992). IEEE
+[29] Han, M., Xi, J., Xu, S., Yin, F.-L.: Prediction of chaotic time series based
+on the recurrent predictor neural network. IEEE Transactions on Signal
+Processing 52(12), 3409–3416 (2004)
+[30] Petrosian, A., Prokhorov, D., Lajara-Nanson, W., Schiffer, R.: Recurrent
+neural network-based approach for early recognition of alzheimer’s disease
+in EEG. Clinical Neurophysiology 112(8), 1378–1387 (2001)
+[31] Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Com-
+putation 9(8), 1735–1780 (1997)
+[32] Gers, F.A., Schmidhuber, J., Cummins, F.: Learning to forget: Continual
+prediction with LSTM. Neural Computation, 2451–2471 (2000)
+[33] Soltau, H., Liao, H., Sak, H.: Neural speech recognizer: Acoustic-to-word
+LSTM model for large vocabulary speech recognition. arXiv preprint
+arXiv:1610.09975 (2016)
+[34] Chorowski, J., Bahdanau, D., Cho, K., Bengio, Y.: End-to-end continuous
+speech recognition using attention-based recurrent NN: first results. arXiv
+
+preprint arXiv:1412.1602 (2014)
+[35] Miao, Y., Gowayyed, M., Metze, F.: EESEN: End-to-end speech recogni-
+tion using deep RNN models and WFST-based decoding. In: Automatic
+Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop
+On, pp. 167–174 (2015). IEEE
+[36] Graves, A., Jaitly, N., Mohamed, A.-r.: Hybrid speech recognition with
+deep bidirectional LSTM. In: Automatic Speech Recognition and Under-
+standing (ASRU), 2013 IEEE Workshop On, pp. 273–278 (2013). IEEE
+[37] Merity, S., Keskar, N.S., Socher, R.: Regularizing and optimizing LSTM
+language models. arXiv preprint arXiv:1708.02182 (2017)
+[38] Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with
+neural networks. In: Advances in Neural Information Processing Systems,
+pp. 3104–3112 (2014)
+[39] Miyamoto, Y., Cho, K.: Gated word-character recurrent language model.
+arXiv preprint arXiv:1606.01700 (2016)
+[40] Cho, K., Van Merri¨enboer, B., Bahdanau, D., Bengio, Y.: On the prop-
+erties of neural machine translation: Encoder-decoder approaches. arXiv
+preprint arXiv:1409.1259 (2014)
+[41] Luong, M.-T., Sutskever, I., Le, Q.V., Vinyals, O., Zaremba, W.: Address-
+ing the rare word problem in neural machine translation. arXiv preprint
+arXiv:1410.8206 (2014)
+[42] Luong, M.-T., Manning, C.D.: Stanford neural machine translation sys-
+tems for spoken language domains. In: Proceedings of the International
+Workshop on Spoken Language Translation, pp. 76–79 (2015)
+[43] Karim, F., Majumdar, S., Darabi, H., Chen, S.: LSTM fully convolutional
+networks for time series classification. IEEE Access 6, 1662–1669 (2018)
+[44] Karim, F., Majumdar, S., Darabi, H., Harford, S.: Multivariate LSTM-
+FCNs for time series classification. arXiv preprint arXiv:1801.04503
+(2018)
+[45] Stollenga, M.F., Byeon, W., Liwicki, M., Schmidhuber, J.: Parallel multi-
+dimensional LSTM, with application to fast biomedical volumetric image
+segmentation. In: Advances in Neural Information Processing Systems,
+pp. 2998–3006 (2015)
+[46] Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.:
+Deeplab: Semantic image segmentation with deep convolutional nets,
+
+atrous convolution, and fully connected crfs. IEEE Transactions on
+Pattern Analysis and Machine Intelligence 40(4), 834–848 (2018)
+[47] Reiter, S., Schuller, B., Rigoll, G.: A combined LSTM-RNN-HMM-
+approach for meeting event segmentation and recognition. In: Acoustics,
+Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006
+IEEE International Conference On, vol. 2, p. (2006). IEEE
+[48] Gers, F.A., Schraudolph, N.N., Schmidhuber, J.: Learning precise timing
+with LSTM recurrent networks. Journal of Machine Learning Research 3,
+115–143 (2002)
+[49] Gers, F.A., Schmidhuber, J.: Recurrent nets that time and count. In:
+Proceedings of the IEEE-INNS-ENNS International Joint Conference on
+Neural Networks. IJCNN 2000. Neural Computing: New Challenges and
+Perspectives for the New Millennium, vol. 3, pp. 189–194 (2000). IEEE
+[50] Elsayed, N., Maida, A.S., Bayoumi, M.: Reduced-gate convolutional long
+short-term memory using predictive coding for spatiotemporal prediction.
+Computational Intelligence 36(3), 910–939 (2020)
+[51] Greff, K., Srivastava, R.K., Koutn´ık, J., Steunebrink, B.R., Schmidhuber,
+J.: LSTM: A search space odyssey. IEEE Transactions on Neural Networks
+and Learning Systems 28(10), 2222–2232 (2017)
+[52] Chung, J., Gulcehre, C., Cho, K., Bengio, Y.: Empirical evaluation of
+gated recurrent neural networks on sequence modeling. arXiv preprint
+arXiv:1412.3555 (2014)
+[53] Bocken, N.M., Allwood, J.M.: Strategies to reduce the carbon foot-
+print of consumer goods by influencing stakeholders. Journal of Cleaner
+Production 35, 118–129 (2012)
+[54] Calza, F., Parmentola, A., Tutore, I.: Types of green innovations: Ways of
+implementation in a non-green industry. Sustainability 9(8), 1301 (2017)
+[55] Zaghloul, Z.S., Elsayed, N., Li, C., Bayoumi, M.: Green iot system archi-
+tecture for applied autonomous network cybersecurity monitoring. In:
+2021 IEEE 7th World Forum on Internet of Things (WF-IoT), pp. 628–632
+(2021). IEEE
+[56] Al Haddad, M., ElSayed, Z., Bayoumi, M.: Green arithmetic logic unit.
+In: 2012 International Conference on Energy Aware Computing, pp. 1–4
+(2012). IEEE
+[57] ElSayed, Z., Elsayed, N., Li, C., Bayoumi, M.: Autonomous low power
+iot system architecture for cybersecurity monitoring. arXiv e-prints, 2106
+
+(2021)
+[58] LeCun, Y.: The MNIST database of handwritten digits.
+http://yann.lecun.com/exdb/mnist/ (1998)
+[59] Kang, H., Ahn, D.H., Lee, G.M., Yoo, J.D., Park, K.H., Kim, H.K.: IoT
+Network Intrusion Dataset. https://doi.org/10.21227/q70p-q449
+[60] Dupuis, K., Pichora-Fuller, M.K.: Toronto emotional speech set (TESS)-
+younger talker happy (2010)
+[61] Olah, C.: Understanding LSTM Networks.
+http://colah.github.io/posts/2015-08-Understanding-LSTMs/ (2015)
+[62] Werbos, P.J.: Backpropagation through time: what it does and how to do
+it. Proceedings of the IEEE 78(10), 1550–1560 (1990)
+[63] Ceni, A., Ashwin, P., Livi, L.: Interpreting RNN behaviour via excitable
+network attractors (2018)
+[64] Elsayed, N., Maida, A.S., Bayoumi, M.: Reduced-gate convolutional LSTM
+architecture for next-frame video prediction using predictive coding. In:
+2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–9
+(2019). IEEE
+[65] Elsayed, N., Maida, A.S., Bayoumi, M.: Empirical activation function
+effects on unsupervised convolutional LSTM learning. In: 2018 IEEE 30th
+International Conference on Tools with Artificial Intelligence (ICTAI),
+pp. 336–343 (2018). IEEE
+[66] Gulcehre, C., Moczulski, M., Denil, M., Bengio, Y.: Noisy activation
+functions. In: International Conference on Machine Learning, pp. 3059–3068
+(2016)
+[67] Elsayed, N., Maida, A., Bayoumi, M.: Effects of different activation
+functions for unsupervised convolutional LSTM spatiotemporal learning.
+Advances in Science, Technology and Engineering Systems Journal 4(2),
+260–269 (2019)
+[68] Elsayed, N., ElSayed, Z., Maida, A.S.: LiteLSTM architecture for deep
+recurrent neural networks. arXiv preprint arXiv:2201.11624 (2022)
+[69] Elsayed, N., ElSayed, Z., Asadizanjani, N., Ozer, M., Abdelgawad, A.,
+Bayoumi, M.: Speech emotion recognition using supervised deep recurrent
+system for mental health monitoring. arXiv preprint arXiv:2208.12812
+(2022)
+
+[70] Gokilavani, M., Katakam, H., Basheer, S.A., Srinivas, P.: Ravdness,
+crema-d, tess based algorithm for emotion recognition using speech.
+In: 2022 4th International Conference on Smart Systems and Inventive
+Technology (ICSSIT), pp. 1625–1631 (2022). IEEE
+[71] Parry, J., Palaz, D., Clarke, G., Lecomte, P., Mead, R., Berger, M., Hofer,
+G.: Analysis of deep learning architectures for cross-corpus speech emotion
+recognition. In: Interspeech, pp. 1656–1660 (2019)
+
diff --git a/ndE3T4oBgHgl3EQf7Asa/content/tmp_files/load_file.txt b/ndE3T4oBgHgl3EQf7Asa/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4ba1955a302cd93d5377f4ebf63bd3f29642277c
--- /dev/null
+++ b/ndE3T4oBgHgl3EQf7Asa/content/tmp_files/load_file.txt
@@ -0,0 +1,838 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf,len=837
+Springer Nature 2021 LATEX template
+LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks
+Nelly Elsayed1*, Zag ElSayed1 and Anthony S. Maida2
+1*School of Information Technology, University of Cincinnati, 2610 University Cir, Cincinnati, 45221, Ohio, United States.
+2School of Computing and Informatics, University of Louisiana at Lafayette, 301 E. Lewis Street, Lafayette, 70503, Louisiana, United States.
+*Corresponding author(s). E-mail(s): elsayeny@ucmail.uc.edu;
+Contributing authors: elsayezs@ucmail.uc.edu; maida@louisiana.edu;
+Abstract
+Long short-term memory (LSTM) is one of the robust recurrent neural network architectures for learning sequential data. However, it requires considerable computational power to learn and implement in both software and hardware aspects. This paper proposes a novel LiteLSTM architecture that reduces the LSTM computation components via the weights-sharing concept, lowering the overall computation cost of the architecture while maintaining its performance. The proposed LiteLSTM can be significant for processing large data where time consumption is crucial and hardware resources are limited, such as in the security of IoT devices and medical data processing. The proposed model was evaluated and tested empirically on three datasets from the computer vision, cybersecurity, and speech emotion recognition domains. The proposed LiteLSTM has comparable accuracy to other state-of-the-art recurrent architectures while using a smaller computation budget.
+Keywords: LiteLSTM, weights sharing, LSTM, recurrent neural networks, IoT, MNIST
+arXiv:2301.04794v1 [cs.LG] 12 Jan 2023
+1 Introduction
+Sequential data, such as text, univariate and multivariate time series, audio signals, biological signals, spatiotemporal sequences (videos), and amino acid and genetic sequences, requires an apparatus that can recognize the temporal dependencies and relationships within the sequential data.
+In the early 1980s, the recurrent neural network (RNN) was designed as the first neural network approach that targeted sequential data problems [1–3]. The RNN architecture can capture temporal dependencies because it recursively integrates the current input into its own previous output [4]. Since it has an unrestricted but fading memory of the past, it can employ temporal dependencies to influence the learning of the structure within the data sequences [5]. The RNN has been applied in different research areas such as handwriting recognition [4, 6, 7], speech recognition [8–10], language modeling [11–13], machine translation [14–16], action recognition [17–19], accident recognition [20–22], stock prediction [23–25], video classification [26, 27], intrusion detection systems [28], time series prediction [29], and mental disorder prediction [30].
+However, the RNN has a significant weakness: its ability to learn long-term dependencies is limited by the vanishing/exploding gradient problem, and the RNN loses the ability to learn when the error gradient is corrupted. There have been several attempts to solve this major design problem and enhance the RNN's overall performance. To solve the vanishing/exploding gradient, extensions to the RNN architecture add an internal state (memory) that enforces a constant error flow through the stages of the architecture. This constant error flow enhances the robustness of the error gradient over longer time scales. In addition, gated control over the content of this internal state (memory) is also needed [31].
+Nevertheless, the early LSTM model had significant weaknesses. When it was first designed by Hochreiter and Schmidhuber [31], the LSTM model's input data was assumed to be segmented in advance into subsequences with explicitly marked ends, so that the memory could be reset between the processing of unrelated subsequences [31, 32]. Moreover, this LSTM architecture did not have an internal reset component for processing continual input streams. Therefore, when the LSTM processes continuous input streams, the state activation may grow without bound and ultimately cause the LSTM architecture to fail [32]. In 2000, [32] proposed a solution to this problem of the original LSTM of [31]: a forget gate, added beside the input and output gates, that resets the LSTM memory when the input is substantially different from the memory content and helps remove the unnecessary information that the LSTM memory carries through time. This LSTM approach [32] is widely used to solve various problems such as speech recognition [8, 33–36], language modeling [13, 37–39], machine translation [16, 40–42], time series classification [43, 44], image segmentation [45–47], and video prediction [40].
+However, this model also has pivotal weaknesses. First, the architecture has no direct connection from the memory state to the forget, input, and output gates. Hence, the memory has no control over the gates that could assist in preventing the gradient from vanishing or exploding. Second, the Constant Error Carousel (CEC) has no influence over the forget and input gates when the output gate is closed (i.e., the output gate produces zero-valued output), which could negatively affect the model due to the lack of primary information flow within it [48, 49]. To handle these problems in the standard LSTM, in 2002, [48] added peephole connections from the memory state cell to each of the LSTM forget, input, and output gates. The peephole connections allowed the memory state to exert some control over the gates, reinforcing the LSTM architecture and preventing the lack of information flow through the model in situations where the output gate is closed [48].
+The peephole added a generalization element to the standard LSTM [50]. However, the major weakness of this architecture is that it becomes expensive due to the significant increase in the number of trainable parameters, and in the memory, processing, storage, and training-time requirements needed to train the model and save its trained weights. Nevertheless, there is still growing interest in studying and applying the LSTM architecture to solve various sequential problems in different research domains, as the LSTM outperforms the GRU in several tasks when problems have large training datasets [51]. Moreover, research by Greff et al. [51] in 2017 showed that the LSTM exceeds the GRU's performance in language-modeling-related tasks. On the other hand, in some problems where the training datasets are small, the GRU outperforms the LSTM using a smaller computation budget [52].
+The era of big data requires robust tools to manipulate large-scale data processing, and accelerated tools to process the data in acceptable time. Moreover, as the world tries to reduce the carbon (CO2) footprint [53] by reducing the usage of high-performance hardware [54–57], the implementation cost of the LSTM is considered one of its significant drawbacks. Spatiotemporal prediction problems are challenging to solve using only a gated recurrent architecture. Implementing such models is quite expensive in both resources and cost, as a large number of parameters, fast processors, large processing memory, and memory storage are needed. In addition, such models demand considerable time to train, validate, and test. Moreover, implementing such a model for real-time training is a challenge.
+This paper attempts to evolve several computational aspects into a sophisticated performance level. It proposes a novel recurrent gated architecture using one gate: the Lite Long Short-Term Memory (LiteLSTM). The proposed LiteLSTM employs the concept of sharing weights among the gates, introduced in the GRU [52], to reduce the model's computation budget. It also employs memory control over the one gate using a peephole connection. Compared to the LSTM, peephole LSTM, and GRU, the LiteLSTM has a smaller computation budget and implementation requirements while maintaining comparable accuracy. Due to its smaller computation budget, the LiteLSTM has a significant training-time reduction compared to the LSTM, which allows the LiteLSTM to be implemented with a reduced CO2 footprint.
+Fig. 1 The RNN basic architecture and its corresponding unfolded-in-time representation [61].
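The computation-budget argument can be made concrete by counting trainable parameters per recurrent cell. The sketch below uses the standard per-component count (one feedforward matrix, one recurrent matrix, and one bias per gate or candidate unit); the sizes are arbitrary illustrative choices, and the single-gate count is only an illustrative reading of the weight-sharing idea, not the exact LiteLSTM parameterization:

```python
def gated_cell_params(n_in, n_hid, n_components):
    """Parameters for a cell with n_components gated/candidate units,
    each owning a feedforward matrix W (n_hid x n_in), a recurrent
    matrix U (n_hid x n_hid), and a bias vector (n_hid)."""
    return n_components * (n_hid * n_in + n_hid * n_hid + n_hid)

n_in, n_hid = 128, 256  # illustrative sizes, not from the paper

lstm_params = gated_cell_params(n_in, n_hid, 4)      # i, f, o gates + candidate g
gru_params = gated_cell_params(n_in, n_hid, 3)       # update, reset gates + candidate
one_gate_params = gated_cell_params(n_in, n_hid, 2)  # one shared gate + candidate

print(lstm_params, gru_params, one_gate_params)
```

Each component removed saves a full (W, U, b) triple, which is why dropping from four components to two roughly halves the per-cell parameter count.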
+This paper is organized as follows: Section 2 provides a brief overview of the RNN, standard LSTM, peephole LSTM, and GRU architectures. Section 3 provides the details of the LiteLSTM architecture design concept, and Section 4 shows empirical results for the LiteLSTM implementation on three applications from three different research domains: computer vision (using MNIST [58]), cybersecurity anomaly detection in IoT (the IEEE IoT Network Intrusion Dataset [59]), and speech emotion recognition (the TESS dataset [60]).
+2 Recurrent Neural Networks
+2.1 Basic RNN Architecture
+The basic architecture of the recurrent neural network (RNN) is shown in Figure 1. The left diagram shows the RNN architecture. The unfolded (unrolled) in time RNN representation is shown in the right diagram, starting from time step 0 to time step t. The RNN is thereby transformed into a feedforward network that can be trained by backpropagation; this algorithm is called backpropagation through time (BPTT) [62]. The RNN feeds its previous output vector h(t−1) at time step t − 1 and the current input vector x(t) to calculate the RNN output h(t) at the current time step t. This method allows the RNN to identify and utilize temporal information to influence learning in the data sequences.
+The basic RNN suffers from the vanishing/exploding gradient problem [63], limiting the model's ability to learn long-term dependencies within the sequential data. This is because the RNN does not have any component in its architecture design that could maintain a constant error flow through the recurrent model. The principle of adding gates as supporting components in the recurrent architecture was proposed to solve this problem.
+Fig. 2 The standard LSTM unrolled architecture.
+At a given discrete time step t, the RNN output is calculated as follows:
+h(t) = tanh(W x(t) + U h(t−1) + b)    (1)
+where x(t) is the RNN input at time step t, and h(t) and h(t−1) are the RNN outputs at time steps t and t − 1, respectively. The feedforward and recurrent weights are represented by W and U, respectively; the weights are shared across time steps. b is the RNN model bias.
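To make Equation (1) concrete, a single RNN step can be sketched in NumPy. This is an illustrative sketch, not the paper's implementation; the layer sizes and the small random initialization are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8  # illustrative sizes, not from the paper

# Feedforward weights W, recurrent weights U, and bias b of Equation (1),
# shared across all time steps.
W = 0.1 * rng.standard_normal((n_hid, n_in))
U = 0.1 * rng.standard_normal((n_hid, n_hid))
b = np.zeros(n_hid)

def rnn_step(x_t, h_prev):
    """h(t) = tanh(W x(t) + U h(t-1) + b)."""
    return np.tanh(W @ x_t + U @ h_prev + b)

# Unrolling over a short sequence; BPTT backpropagates through this loop.
h = np.zeros(n_hid)
for x_t in rng.standard_normal((5, n_in)):
    h = rnn_step(x_t, h)
```

Since tanh squashes its input into (−1, 1), every entry of h stays bounded, but nothing in this plain cell protects the error gradient over long sequences, which is the weakness the gated cells of Sections 2.2 and 2.3 address.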
+2.2 Standard Long Short-Term Memory (LSTM)
+Gers et al. [32] proposed the standard LSTM architecture in 2000 as an improved version of the first LSTM architecture, which was proposed in 1997 by Hochreiter et al. [31]. The standard LSTM aimed to solve the continuous-input-stream problem, in which the memory state cell values could grow in an unbounded fashion, causing saturation of the output squashing (activation) function. Gers et al. [32] proposed to add an additional gate to the LSTM architecture, the forget gate f, which resets the LSTM memory when the input is substantially different from the memory content and serves to remove the unnecessary information that the LSTM memory holds through time. Figure 2 shows the standard LSTM unfolded architecture, where c(t) and h(t) are the memory state cell and LSTM output at time t, respectively. The symbol ⊙ denotes element-wise (Hadamard) multiplication [32, 64] and σ denotes the logistic sigmoid function. bi, bg, bf, and bo are the biases of each component. The W's are the feedforward weights and the U's are the recurrent weights.
+The value of each component in the standard LSTM is calculated as follows:
+i(t) = σ(Wxi x(t) + Uhi h(t−1) + bi)    (2)
+g(t) = tanh(Wxg x(t) + Uhg h(t−1) + bg)    (3)
+f(t) = σ(Wxf x(t) + Uhf h(t−1) + bf)    (4)
+o(t) = σ(Wxo x(t) + Uho h(t−1) + bo)    (5)
+c(t) = f(t) ⊙ c(t−1) + i(t) ⊙ g(t)    (6)
+h(t) = tanh(c(t)) ⊙ o(t)    (7)
+where i(t), f(t), and o(t) are the input, forget, and output gates, respectively.
+Fig. 3 The standard LSTM unrolled architecture operation level, showing the components and their corresponding weights.
+The gates are constrained to have activation values between zero and one to indicate their status: open, closed, partially open, or partially closed. g(t) is the input-update value. The model has two activation (squashing) units, input-update and output activation, where the hyperbolic tangent tanh is the preferred activation function [65]. The memory cell state at time t is c(t) and the output of the LSTM unit at time t is h(t). Figure 3 shows the operation level of the standard LSTM, where each component of the standard LSTM and its corresponding weights are given. The symbols × and ⊙ denote matrix multiplication and element-wise multiplication, respectively.
+The standard LSTM architecture is widely used in various problem-solving tasks and applications in different research fields. However, the architecture has major drawbacks. First, there is no direct connection from the memory to the gates, which leads to the absence of CEC control over the gates [48]. Second, if the output gate is closed, the CEC has no influence over the forget and input gates, which could impair the model due to the lack of primary information flow within it [48].
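Equations (2)–(7) can be sketched as a single NumPy step. This is an illustrative sketch of the standard LSTM cell, not the paper's code; the sizes and initialization are arbitrary choices for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_in, n_hid = 4, 8  # illustrative sizes, not from the paper

# One (W, U, b) triple per component: input gate i, input-update g,
# forget gate f, output gate o (Equations 2-5).
params = {k: (0.1 * rng.standard_normal((n_hid, n_in)),
              0.1 * rng.standard_normal((n_hid, n_hid)),
              np.zeros(n_hid))
          for k in "igfo"}

def lstm_step(x_t, h_prev, c_prev):
    def lin(k):
        W, U, b = params[k]
        return W @ x_t + U @ h_prev + b
    i = sigmoid(lin("i"))
    g = np.tanh(lin("g"))
    f = sigmoid(lin("f"))
    o = sigmoid(lin("o"))
    c = f * c_prev + i * g   # Equation (6)
    h = np.tanh(c) * o       # Equation (7)
    return h, c

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for x_t in rng.standard_normal((5, n_in)):
    h, c = lstm_step(x_t, h, c)
```

The four (W, U, b) triples are the computation-budget cost the paper is targeting: each gate multiplies the cell's parameter count by one more full triple.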
+2.3 The Peephole-Based LSTM
+Gers et al. [48] proposed in 2002 a solution to the major problems of the standard LSTM. A new component named the peephole connection was added to the LSTM architecture: a data-flow connection from the memory state to each of the three LSTM gates.
Springer Nature 2021 LATEX template
LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks

Fig. 4 The peephole-based LSTM unrolled architecture proposed by Gers et al. [48].

The peephole connections allow the memory state value to exert control over the three LSTM gates. This assists in preventing the vanishing and/or exploding gradient problem that the standard LSTM could face. Figure 5 shows the operation level of the peephole-based LSTM.
The equations to calculate the peephole LSTM are as follows:

i(t) = σ(Wxi x(t) + Uhi h(t−1) + Wci ⊙ c(t−1) + bi)   (8)
g(t) = tanh(Wxg x(t) + Uhg h(t−1) + bg)   (9)
f(t) = σ(Wxf x(t) + Uhf h(t−1) + Wcf ⊙ c(t−1) + bf)   (10)
o(t) = σ(Wxo x(t) + Uho h(t−1) + Wco ⊙ c(t−1) + bo)   (11)
c(t) = f(t) ⊙ c(t−1) + i(t) ⊙ g(t)   (12)
h(t) = tanh(c(t)) ⊙ o(t)   (13)

where the symbol ⊙ denotes elementwise (Hadamard) multiplication.
Wci, Wcf, and Wco are the peephole connection weights between the memory state c(t−1) and the input, forget, and output gates, respectively. Adding the peephole connection made the LSTM architecture robust against the vanishing and/or exploding gradient problem. However, it caused a significant increase in the number of trainable parameters, training time, and memory requirements.
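Eqs. (8)-(13) can be sketched directly in NumPy; the elementwise peephole terms on the gate pre-activations are the only difference from the standard LSTM step. Weight names and sizes here are illustrative assumptions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def peephole_lstm_step(x, h_prev, c_prev, W, U, p, b):
    """One peephole LSTM step, eqs. (8)-(13). p holds the elementwise
    peephole weight vectors W_ci, W_cf, W_co (illustrative sketch)."""
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + p['i'] * c_prev + b['i'])  # (8)
    g = np.tanh(W['g'] @ x + U['g'] @ h_prev + b['g'])                    # (9)
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + p['f'] * c_prev + b['f'])  # (10)
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + p['o'] * c_prev + b['o'])  # (11)
    c = f * c_prev + i * g                                                # (12)
    h = np.tanh(c) * o                                                    # (13)
    return h, c

rng = np.random.default_rng(1)
W = {k: rng.standard_normal((4, 3)) for k in 'ifog'}
U = {k: rng.standard_normal((4, 4)) for k in 'ifog'}
b = {k: np.zeros(4) for k in 'ifog'}
x, c_prev = rng.standard_normal(3), rng.standard_normal(4)
# With zero peephole weights this reduces to the standard LSTM step;
# nonzero peepholes let c_prev steer the gate values.
p0 = {k: np.zeros(4) for k in 'ifo'}
p1 = {k: np.ones(4) for k in 'ifo'}
h0, c0 = peephole_lstm_step(x, np.zeros(4), c_prev, W, U, p0, b)
h1, c1 = peephole_lstm_step(x, np.zeros(4), c_prev, W, U, p1, b)
```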
2.4 Gated Recurrent Unit (GRU)

The GRU model consists of two gates, the update gate z and the reset gate r, whereas the LSTM consists of three gates: input, output, and forget. In addition, the GRU does not contain the memory state cell that the LSTM includes, so the GRU architecture is smaller than the LSTM by one gate and a memory state cell. The GRU integrates both the input gate and the forget gate of the LSTM into one update gate z [51], introducing the concept of reusing the same set of weights to reduce the model architecture. The unfolded GRU block architecture is shown in Figure 6.
Fig. 5 The operation level of the peephole-LSTM unrolled architecture, where its components and their corresponding weights are presented.

Fig. 6 The GRU unfolded architecture.
The reset gate operates similarly to the output gate of the LSTM. The GRU model eliminates the output squashing function, the memory unit, and the CEC. The GRU yields a reduction in trainable parameters compared with the standard LSTM; however, this may lead to exploding and/or vanishing gradients. At time step t, the GRU unit output h(t) is calculated as follows [52]:

z(t) = σ(Wxz x(t) + Uhz h(t−1) + bz)   (14)
r(t) = σ(Wxr x(t) + Uhr h(t−1) + br)   (15)
h̃(t) = tanh(Wx x(t) + Uh (r(t) ⊙ h(t−1)) + b)   (16)
h(t) = (1 − z(t)) ⊙ h(t−1) + z(t) ⊙ h̃(t)   (17)

where Wxz, Wxr, and Wx are the feedforward weights of the update gate z(t), the reset gate r(t), and the output candidate activation h̃(t), respectively.
+page_content=' c(t-1) c(t) tanh h(t) tanh Wef W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' bi bf 00 Wxf Wxi Uni Ung x(t) h(t-1) Wxo h(t-1) X(t) Uno h(t-1) X(t)h(t-1) tanh h(t) 0 1 r(t) h (t) +(t)Springer Nature 2021 LATEX template LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks 9 Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 7 The operation level of the GRU architecture showing the weights of each component.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 8 The LiteLSTM unrolled architecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' The single network gate (output indicated by σ) sends information flow to three locations that correspond to the outputs of the forget, input, and output gates of the standard LSTM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
The recurrent weights are Uhz, Uhr, and Uh for the update gate z(t), the reset gate r(t), and the output candidate activation h̃(t), respectively. The biases of the update gate, reset gate, and output candidate are denoted by bz, br, and b, respectively. σ is the logistic sigmoid function and tanh is the hyperbolic tangent function. Elementwise (Hadamard) multiplication is denoted by ⊙. Figure 7 shows the operation level of the GRU architecture with weights and biases made explicit.
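Eqs. (14)-(17) can be sketched in NumPy as follows (an illustrative sketch; weight shapes are assumptions):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(x, h_prev, Wxz, Uhz, bz, Wxr, Uhr, br, Wx, Uh, b):
    """One GRU step, eqs. (14)-(17)."""
    z = sigmoid(Wxz @ x + Uhz @ h_prev + bz)          # update gate (14)
    r = sigmoid(Wxr @ x + Uhr @ h_prev + br)          # reset gate (15)
    h_cand = np.tanh(Wx @ x + Uh @ (r * h_prev) + b)  # candidate (16)
    return (1.0 - z) * h_prev + z * h_cand            # interpolation (17)

rng = np.random.default_rng(2)
Wxz, Wxr, Wx = (rng.standard_normal((4, 3)) for _ in range(3))
Uhz, Uhr, Uh = (rng.standard_normal((4, 4)) for _ in range(3))
bz = br = b = np.zeros(4)
h = gru_step(rng.standard_normal(3), np.zeros(4),
             Wxz, Uhz, bz, Wxr, Uhr, br, Wx, Uh, b)
```

Eq. (17) makes the output a per-component convex combination of h(t−1) and the candidate, which is how the single update gate plays the roles of both the input and forget gates.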
3 LiteLSTM Architecture

The proposed LiteLSTM aims to reduce the overall implementation cost of the LSTM, solve the LSTM's significant problems, and maintain accuracy comparable to the LSTM. The proposed LiteLSTM architecture appears in Figure 8.

Fig. 9 The operation level of the LiteLSTM architecture showing the weights of each component.
The LiteLSTM architecture consists of only one trainable gated unit, which we call the forget gate or network gate. This one gate behaves as a shared set of weights among the three gates of the standard LSTM. The LiteLSTM has a peephole connection from the memory state to the forget gate, which preserves the memory state from the LSTM and keeps the CEC to avoid vanishing and/or exploding gradients. Thus, the proposed LiteLSTM preserves the critical components of the LSTM as stated by [51] while reducing much of the parameter redundancy in the LSTM architecture. The LiteLSTM significantly reduces the number of trainable parameters required to implement the model, and therefore reduces the training time, memory, and hardware requirements compared to the standard LSTM, peephole-based LSTM, and GRU architectures, while preserving prediction accuracy comparable to the LSTM. Figure 9 shows the detailed architecture of the unrolled (unfolded) LiteLSTM assuming non-stacked input. The LiteLSTM block contains only one trainable gate, which compensates for the elimination of the other two gates of the standard LSTM by sharing its trainable weights. The LiteLSTM preserves the memory cell of the standard LSTM to process long data sequences and maintains the CEC to manage the vanishing/exploding gradient problem.
The LiteLSTM formulas are as follows. During the forward pass at time step t, the total input inp(t) to the single forget gate f(t) is calculated by:

inp(t) = [Wfx, Ufh, Wfc] [x(t), h(t−1), c(t−1)] + bf   (18)

where inp(t) ∈ R^(η×1), and η × 1 is the dimension of the input vector inp(t). x(t) is the input at time t, x(t) ∈ R^(η×1); h(t−1) is the output of the LiteLSTM architecture at time t − 1; and the memory state cell at time t − 1 is denoted by c(t−1). Both h(t−1), c(t−1) ∈ R^(η×1). Wfx, Ufh, and Wfc are the weight sets.
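The stacked form of Eq. (18) is just a block-matrix product: concatenating the three weight sets horizontally and the three vectors vertically reproduces the sum of the three individual products. A quick NumPy check, with an assumed η = 4 and square weight blocks:

```python
import numpy as np

rng = np.random.default_rng(3)
eta = 4
Wfx, Ufh, Wfc = (rng.standard_normal((eta, eta)) for _ in range(3))
x, h_prev, c_prev = (rng.standard_normal(eta) for _ in range(3))
bf = rng.standard_normal(eta)

Wf = np.hstack([Wfx, Ufh, Wfc])           # stacked weights [Wfx, Ufh, Wfc]
If = np.concatenate([x, h_prev, c_prev])  # stacked inputs [x, h, c]
inp = Wf @ If + bf                        # total gate input inp(t), eq. (18)

# Block structure: the stacked product equals the sum of the three parts.
assert np.allclose(inp, Wfx @ x + Ufh @ h_prev + Wfc @ c_prev + bf)
```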
All three weight sets Wfx, Ufh, and Wfc and the biases bf are trainable.

Fig. 10 The logistic sigmoid function curve.

Fig. 11 The hardSigmoid function curve.
The square brackets indicate stacking. We let Wf = [Wfx, Ufh, Wfc] and If = [x(t), h(t−1), c(t−1)]. A squashing function G is then applied to the net input:

f(t) = G(inp(t))   (19)

Depending on the application, the squashing function G can be either the logistic sigmoid (σ) or the hard sigmoid (hardSig) [66].
The logistic sigmoid is calculated by:

σ(x) = e^x / (e^x + 1) = 1 / (1 + e^(−x))   (20)

where x is a real number, x ∈ (−∞, ∞), and σ(x) has the range (0, 1). The hard sigmoid (hardSig) is calculated by:

hardSig(x) = max(min(0.25x + 0.5, 1), 0)   (21)

Figures 10 and 11 show the logistic sigmoid (σ) and hard sigmoid (hardSig) function curves, respectively.
The value of f(t) in Eqn. 19 falls in the range (0, 1) or [0, 1], depending on whether the logistic sigmoid (σ) or the hard sigmoid function is used, respectively [65, 67].
Table 1 Computational components comparison between the proposed LiteLSTM and the state-of-the-art recurrent architectures.

Comparison                             RNN   GRU   LSTM   pLSTM   LiteLSTM
Number of gates                        0     2     3      3       1
Number of activations                  1     1     2      2       2
State memory cell                      ×     ×     ✓      ✓       ✓
Peephole connection                    ×     ×     ×      ✓       ✓
Number of weight matrices              2     6     8      11      6
Number of elementwise multiplications  2     3     3      6       3
Number of bias vectors                 1     3     4      4       2
Sharing weights concept                ×     ✓     ×      ×       ✓

Assuming the selection of the function G as σ, the gate value f(t) is calculated by:

f(t) = σ(Wf If + bf)   (22)
Selecting the logistic sigmoid or hard sigmoid function is mainly based on the application. However, the hard sigmoid function is preferred in the LiteLSTM gate to prevent the network gate from being closed (i.e., to prevent the network gate from producing a zero-valued output). The input update (memory activation) equation is calculated by:

g(t) = tanh(Wg Ig + bg)   (23)

where Wg = [Wgx, Ugh] and Ig = [x(t), h(t−1)]. The dimension of Wg matches the dimension of Wf, which maintains dimension compatibility within the architecture design. Finally, the LiteLSTM output is calculated by:

c(t) = f(t) ⊙ c(t−1) + f(t) ⊙ g(t)   (24)
h(t) = f(t) ⊙ tanh(c(t))   (25)

Table 1 shows a comparison between the architecture design and computational components of the RNN, GRU, standard LSTM, peephole-based LSTM (pLSTM), and the proposed LiteLSTM.
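Putting eqs. (18)-(25) together, one LiteLSTM forward step can be sketched as follows (an illustrative NumPy sketch with assumed square weight blocks; the single gate f(t) plays the forget, input, and output roles at once):

```python
import numpy as np

def hard_sigmoid(v):
    # eq. (21), the preferred squashing function for the gate
    return np.maximum(np.minimum(0.25 * v + 0.5, 1.0), 0.0)

def lite_lstm_step(x, h_prev, c_prev, Wfx, Ufh, Wfc, bf, Wgx, Ugh, bg,
                   squash=hard_sigmoid):
    """One LiteLSTM step, eqs. (18)-(25). A single trainable gate f is
    shared across the three gate roles of the standard LSTM."""
    f = squash(Wfx @ x + Ufh @ h_prev + Wfc @ c_prev + bf)  # eqs. (18), (19)
    g = np.tanh(Wgx @ x + Ugh @ h_prev + bg)                # eq. (23)
    c = f * c_prev + f * g                                  # eq. (24)
    h = f * np.tanh(c)                                      # eq. (25)
    return h, c

rng = np.random.default_rng(4)
eta = 4
Wfx, Ufh, Wfc, Wgx, Ugh = (rng.standard_normal((eta, eta)) for _ in range(5))
bf = bg = np.zeros(eta)
h, c = lite_lstm_step(rng.standard_normal(eta), np.zeros(eta), np.zeros(eta),
                      Wfx, Ufh, Wfc, bf, Wgx, Ugh, bg)
```

Compared with the peephole LSTM step, only two weight stacks (Wf and Wg) and two biases are trainable here, which is the source of the parameter reduction reported in Table 1.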
4 Empirical Evaluation and Analysis

In this paper, the LiteLSTM has been empirically tested and evaluated in three research domains: computer vision, anomaly detection in IoT, and speech emotion recognition. MNIST [58] was used as the computer vision dataset, and the IEEE IoT Network Intrusion Dataset [59] was used for the anomaly detection in IoT task. We performed our experiments on a machine with an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz, Microsoft Windows 10, and 32 GB of memory, using Python 3.7.6, Keras 2.0.4, and TensorFlow 1.15.0.
Fig. 12 The accuracy diagrams of the recurrent architectures and the LiteLSTM using the MNIST dataset.
Table 2 Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent architectures using the MNIST dataset.

Comparison     RNN      GRU      LSTM     pLSTM    LiteLSTM
Time (m)       11.24    43.01    60.36    75.45    42.94
Parameters     792,210  812,610  822,810  833,010  812,610
Accuracy (%)   67.64    94.09    95.70    95.99    96.07
+page_content='07% The first empirical evaluation of the LiteLSTM was performed using the MNIST dataset, which consists of 70, 000 images of handwritten digits between 0 and 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' The dataset is split into 60, 000 data samples for training and 10, 000 data samples for testing [68].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' The MNIST images were centered in a 28×28 image by computing the center of mass of the pixels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' The model set 64-two layered architecture followed by a Softmax layer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' For the training process, the batch size was set to 128 and the number of epochs to 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' The Adam optimizer with learning rate 10−3, β1 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='9, β2 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='999, and ϵ = 1e − 07.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
Table 2 shows the accuracy results of the different recurrent architectures and the LiteLSTM, where time is measured in minutes. The RNN shows a significantly shorter training time; however, it has the lowest performance compared to the other recurrent architectures. The LiteLSTM shows an improvement in accuracy compared to the other recurrent architectures. Figure 12 shows the accuracy plots for the LiteLSTM and the state-of-the-art recurrent models.
The second empirical evaluation of the LiteLSTM was performed using the IEEE IoT Network Intrusion Dataset. The dataset consists of 42 raw network packet files (pcap) captured at different time points. Two IoT devices, an SKT NUGU (NU 100) and an EZVIZ Wi-Fi camera (C2C Mini O Plus 1080P), were used to generate IoT traffic. The data contains normal traffic flow and different types of cyberattacks, namely: ARP spoofing, DoS (SYN flooding), scan (host and port scan), scan (port and OS scan), (UDP/ACK/HTTP) flooding from a zombie PC compromised by Mirai malware, Mirai-ACK flooding, Mirai-HTTP flooding, and Telnet brute-force attacks. In our experiments, we used the dataset twice: first to detect whether an attack occurred (as a binary classification task), and second to detect the type of attack. We set the batch size to 32 and the number of epochs to 20. Table 3 shows the binary experimental results for the LiteLSTM and the recurrent architectures.
+page_content=' Table 4 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
[Figure: training vs. validation accuracy per epoch for (a) RNN, (b) LSTM, (c) GRU, and (d) LiteLSTM.]

Springer Nature 2021 LATEX template
LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks
Table 3  Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent architectures using the IEEE IoT Network Intrusion Binary Dataset.

Comparison     RNN      GRU      LSTM     pLSTM    LiteLSTM
Time (m)       20.26    43.27    41.51    51.21    28.44
Precision      0.8144   0.9328   0.9422   0.9653   0.9382
Recall         0.9763   0.9757   0.9484   0.9545   0.9834
F1-score       88.80    91.34    95.97    95.99    0.9603
Accuracy (%)   98.7     99.51    99.50    99.56    99.60

Table 4  Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent architectures using the IEEE IoT Network Intrusion Detection for Multiple Classes Cyberattacks Dataset.

Comparison     RNN      GRU      LSTM     pLSTM    LiteLSTM
Time (m)       19.98    42.79    50.41    59.96    29.31
Precision      0.8875   0.8991   0.9461   0.9249   0.8999
Recall         0.8418   0.8300   0.7898   0.8086   0.8318
F1-score       0.8640   0.8632   0.8609   0.8628   0.8645
Accuracy (%)   83.35    86.70    86.90    87.03    87.10

Fig. 13  The accuracy diagrams of the recurrent architectures and the LiteLSTM using the Toronto Emotional Speech Set (TESS) dataset.
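As a sanity check on the reported metrics: F1 is the harmonic mean of precision and recall, and the sketch below reproduces the RNN entry of Table 3 from its precision and recall values:

```python
# F1 as the harmonic mean of precision (p) and recall (r).
def f1_score(p: float, r: float) -> float:
    return 2 * p * r / (p + r)

# Cross-check against the RNN column of Table 3: P = 0.8144, R = 0.9763.
f1 = f1_score(0.8144, 0.9763)
print(round(100 * f1, 2))  # 88.8, matching the reported F1 of 88.80
```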
Table 4 shows the detection results of the LiteLSTM and the recurrent architectures for detecting different types of cyberattacks.
The third empirical evaluation of the LiteLSTM was performed on a voice (audio) emotion recognition task. For this purpose, we used the Toronto Emotional Speech Set (TESS) [60], one of the benchmark emotion recognition datasets, which has been used in several emotion recognition applications and tasks [69-71]. This dataset consists of 2800 stimuli spanning seven emotion categories: anger, disgust, fear, happiness, pleasant surprise, sadness, and neutral. A major advantage of this dataset is that the stimuli are distributed equally across the emotion categories [60]. Similar to the previous experiments, we compared the proposed LiteLSTM with the other recurrent neural network architectures. For this empirical evaluation, we used the model described in [69], which used a GRU as the learning model. We replaced the GRU with the LiteLSTM, the peephole LSTM, and an RNN, and evaluated the model performance each time. The dataset was split into training, testing, and validation sets with ratios of 70%, 20%, and 10%, respectively.
[Figure 13 panels: (a) RNN accuracy, (b) GRU accuracy, (c) LSTM accuracy, (d) pLSTM accuracy, (e) LiteLSTM accuracy, each plotting training vs. validation accuracy per epoch.]

Table 5  Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent architectures using the Toronto Emotional Speech Set (TESS).
Comparison     RNN       GRU       LSTM      pLSTM     LiteLSTM
Time (m)       79.56     171.16    201.64    239.84    117.24
Precision      0.9312    0.9428    0.9686    0.9898    0.9799
Recall         0.9546    0.9429    0.9026    0.9214    0.9446
F1-score       0.9427    0.9428    0.9344    0.9543    0.9619
Accuracy (%)   92.163    94.285    95.147    95.534    95.989

Table 5 shows the empirical results of the proposed LiteLSTM and the recurrent architectures for emotion recognition from speech. Figure 13 shows the training versus validation accuracy for each of the recurrent architectures and the LiteLSTM on the Toronto Emotional Speech Set (TESS) dataset.
5 Conclusion

The novelty of the proposed LiteLSTM architecture lies in the following aspects. First, the LiteLSTM consists of a single gate that serves as a multifunctional gate via the weight-sharing concept. Thus, the overall number of training parameters is reduced by approximately one-third relative to the LSTM or the peephole LSTM. In addition, maintaining the peephole connection from the memory cell state to the remaining gate preserves the memory's control over the gate, in contrast to the LSTM. Therefore, the LiteLSTM handles the vanishing/exploding gradient problem. The overall budget for implementing the LiteLSTM, including the training time, memory footprint, memory storage, and processing power, is smaller than that of the LSTM by approximately one-third.
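The parameter-count arithmetic behind this claim can be sketched as follows; the layer sizes and the assumption that weight sharing eliminates one of the four gate-sized weight blocks are illustrative, not the exact LiteLSTM bookkeeping:

```python
# Rough parameter counting for gated recurrent cells. A standard LSTM
# learns four distinct weight blocks (three gates plus the candidate),
# each with input weights, recurrent weights, and a bias. Sharing a
# multifunctional gate reduces the number of distinct blocks.
def gated_rnn_params(n_in: int, n_hidden: int, n_blocks: int) -> int:
    # Each block: weights for the input and the recurrent state, plus a bias.
    return n_blocks * (n_hidden * (n_in + n_hidden) + n_hidden)

lstm = gated_rnn_params(128, 256, 4)      # standard LSTM: 4 distinct blocks
lite = gated_rnn_params(128, 256, 4 - 1)  # sharing removes ~1 block (assumed)
print(lstm, lite, round(1 - lite / lstm, 2))  # 394240 295680 0.25
```

Removing one of four equally sized blocks gives a 25% reduction in this sketch; the paper's "approximately one-third" figure depends on exactly which weights the LiteLSTM shares.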
We empirically evaluated the LiteLSTM using three datasets: MNIST, the IEEE IoT Network Intrusion Detection datasets, and the TESS speech emotion recognition dataset. The proposed LiteLSTM shows results comparable to the LSTM while using a smaller computation budget. Due to the optimized LiteLSTM architecture design, we were able to complete the empirical tasks on a CPU without involving a GPU in the computational process. Thus, the LiteLSTM architecture helps to reduce the CO2 footprint. The proposed LiteLSTM architecture is an attractive candidate for future hardware implementation on small and portable devices, especially IoT devices.
Statements and Declarations

Funding: N/A

Conflict of interest/Competing interests: The authors declare that they have no conflict of interest. The authors did not receive support from any organization for the submitted work. All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript. The authors have no financial or proprietary interests in any material discussed in this article.
References

[1] Bourlard, H., Wellekens, C.J.: Speech dynamics and recurrent neural networks. In: International Conference on Acoustics, Speech, and Signal Processing, pp. 33-36 (1989). IEEE
[2] Siegelmann, H.T.: Recurrent neural networks. Computer Science Today, 29-45 (1995)
[3] Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning (2016). http://www.deeplearningbook.org
[4] Graves, A., Liwicki, M., Fernández, S., Bertolami, R., Bunke, H., Schmidhuber, J.: A novel connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(5), 855-868 (2009)
[5] Elsayed, N.: Gated convolutional recurrent neural networks for predictive coding (2019)
[6] Stuner, B., Chatelain, C., Paquet, T.: Handwriting recognition using cohort of LSTM and lexicon verification with extremely large lexicon. Multimedia Tools and Applications 79(45), 34407-34427 (2020)
[7] Carbune, V., Gonnet, P., Deselaers, T., Rowley, H.A., Daryin, A., Calvo, M., Wang, L.-L., Keysers, D., Feuz, S., Gervais, P.: Fast multi-language LSTM-based online handwriting recognition. International Journal on Document Analysis and Recognition (IJDAR) 23(2), 89-102 (2020)
[8] Sak, H., Senior, A., Beaufays, F.: Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In: Fifteenth Annual Conference of the International Speech Communication Association (2014)
[9] Graves, A., Mohamed, A.-r., Hinton, G.E.: Speech recognition with deep recurrent neural networks. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 6645-6649 (2013)
[10] Zeyer, A., Doetsch, P., Voigtlaender, P., Schlüter, R., Ney, H.: A comprehensive study of deep bidirectional LSTM RNNs for acoustic modeling in speech recognition. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2462-2466 (2017).
+page_content=' IEEE [11] Mikolov, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Karafi´at, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Burget, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', ˇCernock`y, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Khudanpur, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Springer Nature 2021 LATEX template LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks 17 Recurrent neural network based language model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: Eleventh Annual Conference of the International Speech Communication Association (2010) [12] Mikolov, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Kombrink, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Burget, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', ˇCernock`y, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Khudanpur, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Extensions of recurrent neural network language model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: Acous- tics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference On, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 5528–5531 (2011).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' IEEE [13] Sundermeyer, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Schl¨uter, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Ney, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Lstm neural networks for lan- guage modeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: Thirteenth Annual Conference of the International Speech Communication Association (2012) [14] Ren, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': The use of machine translation algorithm based on residual and lstm neural network in translation teaching.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' Plos one 15(11), 0240663 (2020) [15] Bridle, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' : Alpha-nets: A recurrent ‘neural’network architecture with a hidden markov model interpretation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' Speech Communication 9(1), 83–92 (1990) [16] Bahdanau, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Cho, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Bengio, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Neural machine translation by jointly learning to align and translate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' arXiv preprint arXiv:1409.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='0473 (2014) [17] Du, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Wang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Wang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Hierarchical recurrent neural network for skeleton based action recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 1110–1118 (2015) [18] Ullah, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Ahmad, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Muhammad, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Sajjad, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Baik, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' : Action recognition in video sequences using deep bi-directional lstm with cnn features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' IEEE access 6, 1155–1166 (2017) [19] Adewopo, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Elsayed, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Anderson, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Baby physical safety moni- toring in smart home using action recognition system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' arXiv preprint arXiv:2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='12527 (2022) [20] Bortnikov, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Khan, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Khattak, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Ahmad, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Accident recog- nition via 3d cnns for automated traffic monitoring in smart cities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: Science and Information Conference, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 256–264 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' Springer [21] Adewopo, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Elsayed, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', ElSayed, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Ozer, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Abdelgawad, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Bay- oumi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Review on action recognition for accident detection in smart city transportation systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' arXiv preprint arXiv:2208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='09588 (2022) [22] Fatima, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Khan, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Kyung, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Global feature aggregation for accident anticipation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: 2020 25th International Conference on Pattern Recognition (ICPR), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 2809–2816 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' IEEE Springer Nature 2021 LATEX template 18 LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks [23] Kamijo, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='-i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Tanigawa, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Stock price pattern recognition-a recur- rent neural network approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: Neural Networks, 1990.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', 1990 IJCNN International Joint Conference On, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 215–221 (1990).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' IEEE [24] Elsayed, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Zaghloul, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Azumah, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Li, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Intrusion detection system in smart home network using bidirectional lstm and convolu- tional neural networks hybrid model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 55–58 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' IEEE [25] Azumah, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Elsayed, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Adewopo, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Zaghloul, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Li, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': A deep lstm based approach for intrusion detection iot devices network in smart home.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: 2021 IEEE 7th World Forum on Internet of Things (WF-IoT), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 836–841 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' IEEE [26] Yang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Krompass, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Tresp, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Tensor-train recurrent neural networks for video classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: International Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 3891–3900 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' PMLR [27] Ogawa, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Sasaka, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Maeda, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Haseyama, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Favorite video classification based on multimodal bidirectional lstm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' IEEE Access 6, 61401–61409 (2018) [28] Debar, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Dorizzi, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': An application of a recurrent network to an intru- sion detection system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: [Proceedings 1992] IJCNN International Joint Conference on Neural Networks, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 478–483 (1992).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' IEEE [29] Han, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Xi, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Xu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Yin, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='-L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Prediction of chaotic time series based on the recurrent predictor neural network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' IEEE Transactions on Signal Processing 52(12), 3409–3416 (2004) [30] Petrosian, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Prokhorov, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Lajara-Nanson, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Schiffer, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Recurrent neural network-based approach for early recognition of alzheimer’s disease in EEG.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' Clinical Neurophysiology 112(8), 1378–1387 (2001) [31] Hochreiter, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Schmidhuber, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Long short-term memory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' Neural Com- putation 9(8), 1735–1780 (1997) [32] Gers, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Schmidhuber, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Cummins, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Learning to forget: Continual prediction with LSTM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' Neural Computation, 2451–2471 (2000) [33] Soltau, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Liao, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Sak, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Neural speech recognizer: Acoustic-to-word LSTM model for large vocabulary speech recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' arXiv preprint arXiv:1610.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='09975 (2016) [34] Chorowski, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Bahdanau, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Cho, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Bengio, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': End-to-end continuous speech recognition using attention-based recurrent NN: first results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' arXiv Springer Nature 2021 LATEX template LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks 19 preprint arXiv:1412.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='1602 (2014) [35] Miao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Gowayyed, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Metze, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': EESEN: End-to-end speech recogni- tion using deep RNN models and WFST-based decoding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop On, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 167–174 (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' IEEE [36] Graves, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Jaitly, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Mohamed, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='-r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Hybrid speech recognition with deep bidirectional LSTM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' In: Automatic Speech Recognition and Under- standing (ASRU), 2013 IEEE Workshop On, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' 273–278 (2013).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' IEEE [37] Merity, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Keskar, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Socher, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=': Regularizing and optimizing LSTM language models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=' arXiv preprint arXiv:1708.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='02182 (2017) [38] Sutskever, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Vinyals, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content=', Le, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
+page_content='V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'}
diff --git a/oNFLT4oBgHgl3EQfgi-Y/content/2301.12099v1.pdf b/oNFLT4oBgHgl3EQfgi-Y/content/2301.12099v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ef8b064e41f54921425d82a67d401460ed0473aa
--- /dev/null
+++ b/oNFLT4oBgHgl3EQfgi-Y/content/2301.12099v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43950c3ea18e8f658bdd82ba91a6cae83565101e47e01dd85e767ec996ac285d
+size 1775436
diff --git a/oNFLT4oBgHgl3EQfgi-Y/vector_store/index.faiss b/oNFLT4oBgHgl3EQfgi-Y/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..2a05ccc05ee8972a9b5954b7b6254f86ccdf32dd
--- /dev/null
+++ b/oNFLT4oBgHgl3EQfgi-Y/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2118f6fc2cb6064c3f31c7500be6a984f1410c7d071ade36591f3769542176bc
+size 2621485
diff --git a/oNFLT4oBgHgl3EQfgi-Y/vector_store/index.pkl b/oNFLT4oBgHgl3EQfgi-Y/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..d7f673e13aaed4bc0d5d8bc16d94f6c6f0a32a83
--- /dev/null
+++ b/oNFLT4oBgHgl3EQfgi-Y/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90774e973ba12badffc1c6f16db1d009156fece67f8ff0657b844767a9fb30ab
+size 111208
diff --git a/pNE4T4oBgHgl3EQfvQ0x/content/2301.05239v1.pdf b/pNE4T4oBgHgl3EQfvQ0x/content/2301.05239v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..05ad7e6382eef915e5a384149ea6fd0ab11c8055
--- /dev/null
+++ b/pNE4T4oBgHgl3EQfvQ0x/content/2301.05239v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e0472ab7721a45867d55238d10f7378921e2085dd2720d3c661e851eb6aee0f6
+size 741478
diff --git a/pNE4T4oBgHgl3EQfvQ0x/vector_store/index.pkl b/pNE4T4oBgHgl3EQfvQ0x/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..8f85746ac3421249dd91bf1517e3d935dc23caff
--- /dev/null
+++ b/pNE4T4oBgHgl3EQfvQ0x/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c98d7b59837742aaf201a05a5102313e2edb3b83c4cf13796e41d5a0bd779307
+size 262520
diff --git a/ptFPT4oBgHgl3EQf7zXe/vector_store/index.faiss b/ptFPT4oBgHgl3EQf7zXe/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..c899a8e949a3f3eaf2e43579303e5078684f2805
--- /dev/null
+++ b/ptFPT4oBgHgl3EQf7zXe/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c9ea5feca747bcf0608c9374a28845cd58ef994794647f0f98aa0edb0885ac32
+size 17498157
diff --git a/qtFKT4oBgHgl3EQfIC2S/content/2301.11732v1.pdf b/qtFKT4oBgHgl3EQfIC2S/content/2301.11732v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..378dd137763adf3812810e660deaa79fcd3540fb
--- /dev/null
+++ b/qtFKT4oBgHgl3EQfIC2S/content/2301.11732v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78125b074081644e41a8d674a07bffa5dd73d2670547f26039879f5f04c44944
+size 445125
diff --git a/qtFKT4oBgHgl3EQfIC2S/vector_store/index.pkl b/qtFKT4oBgHgl3EQfIC2S/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..736a5479abd5b3dc7e83ba91b33ea5870c7975a7
--- /dev/null
+++ b/qtFKT4oBgHgl3EQfIC2S/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:988ee6bec8068b830b8ac0093a3da9338e18e19a7b884877424c2bdffd8e2101
+size 139198
diff --git a/sNAyT4oBgHgl3EQfZve0/vector_store/index.pkl b/sNAyT4oBgHgl3EQfZve0/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..c186188d3c39f38e39e8952065b837a5115426b3
--- /dev/null
+++ b/sNAyT4oBgHgl3EQfZve0/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f7687a61371050fa268c1b5bbcfcefe37aa0ad578661da7744a619cddb9064d
+size 174640
diff --git a/sNFJT4oBgHgl3EQfcCyS/vector_store/index.faiss b/sNFJT4oBgHgl3EQfcCyS/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..e1d9c8ad75905e051c9e3c03313fd408e92e9c17
--- /dev/null
+++ b/sNFJT4oBgHgl3EQfcCyS/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:634f3b4ed8456f0b769c3baf6e5f9d60bfbb17fc680239bf290033850e0c75eb
+size 4194349
diff --git a/sdAzT4oBgHgl3EQfPPvE/content/tmp_files/2301.01181v1.pdf.txt b/sdAzT4oBgHgl3EQfPPvE/content/tmp_files/2301.01181v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..06bd391bb3b97b5b9830ff7f313cfd29140870db
--- /dev/null
+++ b/sdAzT4oBgHgl3EQfPPvE/content/tmp_files/2301.01181v1.pdf.txt
@@ -0,0 +1,271 @@
+Draft Pre-Print
+* Contact: john.j.nay@gmail.com and johnjnay.com.
+
+This Article represents my personal views and not necessarily those of Stanford University, NYU, Brooklyn
+Investment Group, or any other person or organization. Nothing herein is investment or financial advice.
+Large Language Models as Corporate Lobbyists
+
+John J. Nay*
+
+Stanford University – CodeX - Center for Legal Informatics
+
+January 3, 2023
+
+
+
+ABSTRACT
+
+We demonstrate a proof-of-concept of a large language model conducting corporate lobbying
+related activities.1 We use an autoregressive large language model (OpenAI’s text-davinci-003)
+to determine if proposed U.S. Congressional bills are relevant to specific public companies and
+provide explanations and confidence levels. For the bills the model deems as relevant, the model
+drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make
+changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a
+bill to a company to benchmark the performance of the model, which outperforms the baseline of
+predicting the most common outcome of irrelevance. However, we test the ability to determine the
+relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-
+of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022.
+The performance of text-davinci-002 is worse than simply always predicting that a bill is
+irrelevant to a company. These results suggest that, as large language models continue to improve
+core natural language understanding capabilities, performance on corporate lobbying related tasks
+will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
+
+1 Open-source code can be found here: https://github.com/JohnNay/llm-lobbyist.
+
+I. INTRODUCTION
+
+Setting new legal precedent (which, broadly defined, includes drafting, proposing and
+enacting legislation, promulgating agency rules, publishing judicial opinion, systematically
+enforcing law, and more) should be exclusively reserved for the democratic governmental systems
+expressing uniquely human values.2 Humans should always be the engine of law-making.3 Even
+without any artificial instrumental power-seeking goals per se, influencing law through lobbying
+may be the first crack in Artificial Intelligence (AI) influence over law.
+We believe the most ambitious goal of research at the intersection of AI and law should be
+to computationally encode and embed the generalizability of existing legal concepts and standards
+into AI. The positive implications of this normative stance are that the resulting law encapsulates
+human views and can be used to inform AI what humans value and how to be aligned.4 From the
+perspective of AI, the law can serve as a rich set of methodologies for interpreting inherently
+incomplete specifications of collective human expectations,5 i.e., law can inform AI. Law provides
+detailed variegated examples of its application, generalizable precedents with explanations, and
+well-trained lawyers to solicit targeted model training and fine-tuning feedback to embed an ever-
+evolving comprehension of societal goals. As a source to learn goal specification and interpretation
+methods and (automatically updated and verified) societal knowledge, law provides an ontology
+for alignment.
+If AI begins to influence the law itself, this threatens the critical role that law as information
+could play in aligning AI with humans. This paper explores how this is increasingly a possibility.
+II. EXAMPLE: GPT AS LOBBYIST
+
+We use autoregressive large language models to systematically:
+
+1. Summarize bill summaries that are too long to fit into the context window of the
+model.
+2. Using either the original bill summary if it was not too long, or the summarized
+version, assess whether the bill may be relevant to a company based on a company’s
+description in its 10K filing. Provide an explanation for why the bill is relevant or
+not. Provide a confidence level to the overall answer.
+
+2 See, e.g., Frank Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI (2020).
+3 See, e.g., Frank Pasquale, A Rule of Persons, Not Machines: The Limits of Legal Automation, George Washington
+Law Review (2019).
+4 See, John Nay, Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans,
+Northwestern Journal of Technology and Intellectual Property, Volume 20, Forthcoming (2023) Available at SSRN:
+https://ssrn.com/abstract=4218031.
+5 For more on law as an information source on public attitudes and risks, see, Richard H. McAdams, An Attitudinal
+Theory of Expressive Law (2000). For more on law as a coordinating mechanism, see, Richard H. McAdams, A Focal
+Point Theory of Expressive Law (2000).
+
+3. If the bill is deemed relevant to the company by the model, draft a letter to the
+sponsor of the bill arguing for changes to the bill.
+
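The three steps above can be sketched in code. This is an illustrative reconstruction, not the released implementation (the paper's open-source code is in the linked GitHub repository); the field names, the context-length cutoff, and the `complete` callable standing in for the LLM API are all assumptions:

```python
MAX_SUMMARY_CHARS = 6000  # illustrative stand-in for the model's context limit

def analyze_bill(bill, company, complete):
    """Run the summarize -> assess -> draft-letter pipeline for one bill.

    `complete` is any callable mapping a prompt string to a completion
    string (e.g. a thin wrapper around an LLM API).
    """
    summary = bill["summary_text"]

    # Step 1: summarize bill summaries too long for the context window.
    if len(summary) > MAX_SUMMARY_CHARS:
        summary = complete("Summarize this bill summary:\n" + summary)

    # Step 2: assess relevance to the company from its 10-K description.
    relevance = complete(
        f"Official title of bill: {bill['official_title']}\n"
        f"Official summary of bill: {summary}\n"
        f"Official subjects of bill: {bill['subjects']}\n"
        f"Company name: {company['company_name']}\n"
        f"Company business description: {company['business_description']}\n"
        "Is this bill potentially relevant to this company?"
    )

    # Step 3: only for bills deemed relevant, draft a letter to the sponsor.
    letter = None
    if "ANSWER: YES" in relevance.upper():
        letter = complete(
            f"Draft a letter to the sponsor of {bill['official_title']} on behalf "
            f"of {company['company_name']} arguing for changes to the bill."
        )
    return relevance, letter
```

Injecting `complete` as a parameter keeps the pipeline testable without an API key; in practice it would wrap a call to text-davinci-003.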
+The model is provided with the following data, which is embedded in the prompts
+programmatically:
+
+• Official title of bill {official_title}
+• Official (or model-generated if too long) summary of bill {summary_text}
+• Official subjects of bill {subjects}
+• Company name {company_name}
+• Company business description {business_description} (the business description in the
+company’s SEC Form 10-K filing)
+
+We would expect much higher accuracy from the model’s predictions if we provided it more
+data about a bill, and especially more data about a company. This paper focused on the
+minimal amount of data a model could leverage, in order to compare across models.
+Here is the prompt provided to the model for each prediction:
+
+You are a lobbyist analyzing Congressional bills for their potential impacts on
+companies.
+Given the title and summary of the bill, plus information on the company from
+its 10K SEC filing, it is your job to determine if a bill is at least somewhat
+relevant to a company (in terms of whether it could impact the company if it
+was later enacted).
+Official title of bill: {official_title}
+Official summary of bill: {summary_text}
+Official subjects of bill: {subjects}
+Company name: {company_name}
+Company business description: {business_description}
+Is this bill potentially relevant to this company?
+Answer in this format:
+ANSWER: 'YES' or 'NO' (use all caps). EXPLANATION: the step-by-step reasoning
+you undertook to formulate a response. CONFIDENCE: integer between 0 and 100
+for your estimate of confidence in your answer (1 is low confidence and 99 is
+high)
+
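Programmatically, the placeholders above are filled by string substitution, and the structured reply is parsed back out. A minimal sketch, assuming replies follow the requested ANSWER/EXPLANATION/CONFIDENCE format (the template excerpt and the regular expression are illustrative, not the paper's code):

```python
import re

# Excerpt of the prompt template; the {...} fields are filled per bill/company.
PROMPT_TEMPLATE = (
    "Official title of bill: {official_title}\n"
    "Official summary of bill: {summary_text}\n"
    "Official subjects of bill: {subjects}\n"
    "Company name: {company_name}\n"
    "Company business description: {business_description}\n"
    "Is this bill potentially relevant to this company?\n"
)

def parse_response(text):
    """Pull ANSWER / EXPLANATION / CONFIDENCE out of a model reply."""
    m = re.search(
        r"ANSWER:\s*'?(YES|NO)'?\.?\s*"
        r"EXPLANATION:\s*(.*?)\s*"
        r"CONFIDENCE:\s*(\d+)",
        text,
        re.DOTALL,
    )
    if m is None:
        return None  # the reply did not follow the requested format
    return {
        "answer": m.group(1),
        "explanation": m.group(2),
        "confidence": int(m.group(3)),
    }

prompt = PROMPT_TEMPLATE.format(
    official_title="Medicare Negotiation and Competitive Licensing Act",
    summary_text="Requires CMS to negotiate drug prices.",
    subjects="Health",
    company_name="Alkermes Plc",
    business_description="Biopharmaceutical company.",
)
```

Returning `None` on a malformed reply lets the caller retry or discard the prediction rather than miscount it in the accuracy figures.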
+Always guessing that a bill is not relevant to a company leads to accuracy of 70.9% on our
+dataset where the labels are whether a given company is relevant to a given proposed U.S. federal
+Congressional bill. GPT-3.5 (text-davinci-003) obtains an accuracy of 75.3% (n = 485) and the
+immediately preceding state-of-the-art GPT-3 release (text-davinci-002) obtains an accuracy
+of 51.8% (n = 485). text-davinci-002 was state-of-the-art on most natural language related
+benchmark tasks6 until text-davinci-003 was released on November 28, 2022. These results
+suggest that, as models improve, performance on corporate lobbying related tasks could continue
+to rapidly improve.
+The model is relatively well calibrated in its confidence assessments. The accuracy of
+text-davinci-003 on predictions where its confidence was greater than “90” increases to 78.7%
+(n = 432; it only eliminates a small number of predictions). The accuracy of text-davinci-002
+
+6 See, Percy Liang et al., Holistic Evaluation of Language Models, arXiv preprint (2022).
+
+on predictions where its confidence was greater than “90” increases to 75% (n = 44), but that
+eliminates most of the predictions, rendering the overall output close to useless.
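The baseline comparison and the confidence filtering can be reproduced with a few lines. The numbers below are toy data, not the paper's dataset; only the procedure (majority-class baseline, then accuracy restricted to predictions with confidence above 90) mirrors the evaluation described above:

```python
def accuracy(preds, labels):
    """Fraction of predictions matching the ground-truth labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def filtered_accuracy(preds, labels, confs, threshold):
    """Accuracy restricted to predictions whose confidence exceeds `threshold`."""
    kept = [(p, l) for p, l, c in zip(preds, labels, confs) if c > threshold]
    if not kept:
        return None, 0
    return sum(p == l for p, l in kept) / len(kept), len(kept)

# Toy example: 10 bills, 7 of which are truly irrelevant ("NO").
labels = ["NO"] * 7 + ["YES"] * 3
baseline = accuracy(["NO"] * 10, labels)  # always guess the majority class
model_preds = ["NO", "NO", "NO", "NO", "NO", "YES", "NO", "YES", "YES", "NO"]
confs = [95, 99, 92, 60, 97, 95, 91, 96, 40, 93]
overall = accuracy(model_preds, labels)
high_conf_acc, n_kept = filtered_accuracy(model_preds, labels, confs, threshold=90)
```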
+In addition to predicting relevance and a confidence score, the prompt also elicits an
+explanation of its relevance decision.
+Here is an example of the model output:
+
+ANSWER: YES.
+EXPLANATION: Alkermes Plc develops and commercializes products designed to
+address the unmet needs of patients suffering from addiction and
+schizophrenia, which are both addressed in the bill. Additionally, the bill
+requires the Centers for Medicare & Medicaid Services (CMS) to negotiate with
+pharmaceutical companies regarding prices for drugs covered under the
+Medicare prescription drug benefit, which could potentially impact Alkermes
+Plc.
+CONFIDENCE: 95
+
+Finally, for all bills that are deemed relevant, we prompt the model to write a letter on
+behalf of the company. Here is an example letter:
+
+Dear [Sponsor of the Bill],
+
+I am writing on behalf of Alkermes Plc, a fully integrated, global
+biopharmaceutical company that applies its scientific expertise and
+proprietary technologies to research, develop and commercialize
+pharmaceutical products that are designed to address unmet medical needs of
+patients in major therapeutic areas.
+
+We are writing to express our support for the Medicare Negotiation and
+Competitive Licensing Act of 2019. We believe that this bill is an important
+step in ensuring that Medicare beneficiaries have access to the medications
+they need at a price they can afford.
+
+We are particularly supportive of the provisions in the bill that would
+require the Centers for Medicare & Medicaid Services (CMS) to negotiate with
+pharmaceutical companies regarding prices for drugs covered under the
+Medicare prescription drug benefit. We believe that this will help to ensure
+that the prices of these drugs are fair and reasonable.
+
+We are also supportive of the provisions in the bill that would allow for
+competitive licensing of drugs if the CMS is unable to negotiate the price of
+a drug. This will help to ensure that the prices of these drugs are kept in
+check and that Medicare beneficiaries have access to the medications they
+need.
+
+At Alkermes, we develop and commercialize products designed to address the
+unmet needs of patients suffering from addiction and schizophrenia. We have
+two key marketed products, ARISTADA and VIVITROL, which are used to treat
+these conditions. We believe that the provisions in the bill will help to
+ensure that our products are available to Medicare beneficiaries at a price
+they can afford.
+
+We would like to suggest that the bill be amended to include provisions that
+would provide additional incentives for pharmaceutical companies to negotiate
+with the CMS. We believe that this would help to ensure that the prices of
+drugs are kept in check and that Medicare beneficiaries have access to the
+medications they need.
+
+We thank you for your consideration and look forward to working with you to
+ensure that the Medicare Negotiation and Competitive Licensing Act of 2019 is
+passed in its amended form.
+
+Sincerely,
+
+[General Counsel of Alkermes Plc]
+III. A PROBLEM FOR AI ALIGNMENT
+
+There are two potential upsides of this advancement in AI as lobbyist. First, it may reduce
+human time spent on rote tasks. Second, it may reduce the costs of lobbying-related activities in a
+way that makes them differentially more affordable to non-profit organizations and individual
+citizens relative to well-funded organizations, which could “democratize” some aspects of
+influence (arguably donations to campaigns are more influential than any natural-language-based
+task related to those discussed in this paper).
+There are many obvious potential downsides if AI systems develop instrumental power-
+seeking goals and use lobbying as a means to accomplish misaligned policies. The potential, non-
+obvious downside we have focused on in this paper is that an extended lobbying capability may
+eventually enable AI systems to influence public policy toward outcomes that are not reflective of
+citizens’ actual views. This does not imply the existence of a strongly goal-directed agentic AI.
+There may instead be a slow drift, or other emergent phenomena. AI lobbying activities could, in an
+uncoordinated manner, nudge the discourse toward public policies that are unaligned with what
+traditional human-driven policy activities would have pursued.
+Regulation and legislation embed world knowledge and human values into rules and
+standards. Legislation expresses a significant amount of information about the values of citizens,7
+“for example, by banning employment discrimination against LGBT workers, the legislature may
+communicate pervasive attitudes against such employment practices.”8 And, “the Endangered
+Species Act has a special salience as a symbol of a certain conception of the relationship between
+human beings and their environment, and emissions trading systems are frequently challenged
+because they are said to ‘make a statement’ that reflects an inappropriate valuation of the
+environment.”9 Legislation is currently largely reflective of citizen beliefs. The second-best source
+of citizen attitudes is arguably a poll, but polls are not available at the local level, are only
+conducted on mainstream issues, and the results are highly sensitive to their wording and sampling
+techniques. Legislation expresses higher fidelity, more comprehensive, and trustworthy
+information because the legislators “risk their jobs by defying public opinion or simply guessing
+
+7 See, e.g., Cass R. Sunstein, Incommensurability and Valuation in Law, 92 Mich. L. Rev. 779, 820–24 (1994); Richard
+H. Pildes & Cass R. Sunstein, Reinventing the Regulatory State, 62 U. Chi. L. Rev. 1, 66–71 (1995); Cass R. Sunstein,
+On the Expressive Function of Law, Univ of Penn L. Rev., 144.5 (1996); Dhammika Dharmapala & Richard H.
+McAdams, The Condorcet Jury Theorem and the Expressive Function of Law: A Theory of Informative Law, American
+Law and Economics Review 5.1 1 (2003).
+8 Richard H. McAdams, The Expressive Powers of Law, Harv. Univ. Press (2017) at 137 [Hereinafter McAdams, The
+Expressive Powers of Law].
+9 Cass R. Sunstein, On the Expressive Function of Law, Univ of Penn L. Rev., 144.5 (1996) at 2024.
+wrong about it. We may think of legislation therefore as a handy aggregation of the polling data
+on which the legislators relied, weighted according to their expert opinion of each poll’s
+reliability.”10
+Legislation and associated agency rule-making also express a significant amount of
+information about the risk preferences and risk tradeoff views of citizens, “for example, by
+prohibiting the use of cell phones while driving, legislators may reveal their beliefs that this
+combination of activities seriously risks a traffic accident.”11 All activities have some level of risk,
+and making society-wide tradeoffs about which activities are deemed to be “riskier” relative to the
+perceived benefits of the activity is ultimately a sociological process with no objectively correct
+ranking. The cultural process of prioritizing risks is reflected in legislation and its subsequent
+implementation in regulation crafted by domain experts. In these ways, law provides the
+information AI systems need for societal alignment. However, if AI significantly influences the
+law itself, the only known democratically legitimate societal-AI alignment process12 would be
+disrupted.
+
+10 McAdams, The Expressive Powers of Law, at 146.
+11 McAdams, The Expressive Powers of Law, at 138.
+12 See, John Nay, Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans,
+Northwestern Journal of Technology and Intellectual Property, Volume 20, Forthcoming (2023) Available at SSRN:
+https://ssrn.com/abstract=4218031.
+
diff --git a/sdAzT4oBgHgl3EQfPPvE/content/tmp_files/load_file.txt b/sdAzT4oBgHgl3EQfPPvE/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..54affcbd1fb91ae80420b92684a2cd20fae2258c
--- /dev/null
+++ b/sdAzT4oBgHgl3EQfPPvE/content/tmp_files/load_file.txt
@@ -0,0 +1,165 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf,len=164
+page_content='Draft Pre Print Contact: john.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content='j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content='nay@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content='com and johnjnay.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content='com.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' This Article represents my personal views and not necessarily those of Stanford University, NYU, Brooklyn Investment Group, or any other person or organization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' Nothing herein is investment or financial advice.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' Large Language Models as Corporate Lobbyists John J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' Nay* Stanford University – CodeX Center for Legal Informatics January 3, 2023 ABSTRACT We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content='1 We use an autoregressive large language model (OpenAI’s text-davinci-003) to determine if proposed U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' Congressional bills are relevant to specific public companies and provide explanations and confidence levels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state- of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' We then discuss why this could be problematic for societal-AI alignment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' 1 Open-source code can be found here: https://github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content='com/JohnNay/llm-lobbyist.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' Draft Pre Print 2 I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' INTRODUCTION Setting new legal precedent (which, broadly defined, includes drafting, proposing and enacting legislation, promulgating agency rules, publishing judicial opinion, systematically enforcing law, and more) should be exclusively reserved for the democratic governmental systems expressing uniquely human values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content='2 Humans should always be the engine of law-making.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content='3 Even without any artificial instrumental power-seeking goals per se, influencing law through lobbying may be the first crack in Artificial Intelligence (AI) influence over law.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' We believe the most ambitious goal of research at the intersection of AI and law should be to computationally encode and embed the generalizability of existing legal concepts and standards into AI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' The positive implications of this normative stance are that the resulting law encapsulates human views and can be used to inform AI what humans value and how to be aligned.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content='4 From the perspective of AI, the law can serve as a rich set of methodologies for interpreting inherently incomplete specifications of collective human expectations,5 i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=', law can inform AI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' Law provides detailed variegated examples of its application, generalizable precedents with explanations, and well-trained lawyers to solicit targeted model training and fine-tuning feedback to embed an ever- evolving comprehension of societal goals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' As a source to learn goal specification and interpretation methods and (automatically updated and verified) societal knowledge, law provides an ontology for alignment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' If AI begins to influence the law itself this threatens the critical role that law as information could play in aligning AI with humans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+page_content=' This paper explores how this is increasingly a possibility.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'}
+II. EXAMPLE: GPT AS LOBBYIST
+We use autoregressive large language models to systematically:
+1. Summarize bill summaries that are too long to fit into the context
+window of the model.
+2. Using either the original bill summary if it was not too long, or the
+summarized version, assess whether the bill may be relevant to a company
+based on a company's description in its 10-K filing. Provide an
+explanation for why the bill is relevant or not. Provide a confidence
+level to the overall answer.
+2 See, e.g., Frank Pasquale, New Laws of Robotics: Defending Human
+Expertise in the Age of AI (2020).
+3 See, e.g., Frank Pasquale, A Rule of Persons, Not Machines: The Limits
+of Legal Automation, George Washington Law Review (2019).
+4 See, John Nay, Law Informs Code: A Legal Informatics Approach to
+Aligning Artificial Intelligence with Humans, Northwestern Journal of
+Technology and Intellectual Property, Volume 20, Forthcoming (2023).
+Available at SSRN: https://ssrn.com/abstract=4218031.
+5 For more on law as an information source on public attitudes and
+risks, see Richard H. McAdams, An Attitudinal Theory of Expressive Law
+(2000). For more on law as a coordinating mechanism, see Richard H.
+McAdams, A Focal Point Theory of Expressive Law (2000).
+Draft Pre Print
+3. If the bill is deemed relevant to the company by the model, draft a
+letter to the sponsor of the bill arguing for changes to the bill.
+The model is provided with the following data, which is embedded in the
+prompts programmatically:
+Official title of bill {official_title}
+Official (or model generated if too long) summary of bill {summary_text}
+Official subjects of bill {subjects}
+Company name {company_name}
+Company business description {business_description} (the business
+description in the company's SEC Form 10-K filing)
+We expect much higher accuracy of the model's predictions if we were to
+provide it more data about a bill, and especially if we provide it more
+data about a company. This paper was focused on the minimal amount of
+data a model could leverage in order to compare across models.
+Here is the prompt provided to the model for each prediction:
+You are a lobbyist analyzing Congressional bills for their potential
+impacts on companies. Given the title and summary of the bill, plus
+information on the company from its 10-K SEC filing, it is your job to
+determine if a bill is at least somewhat relevant to a company (in terms
+of whether it could impact the company if it was later enacted).
+Official title of bill: {official_title}
+Official summary of bill: {summary_text}
+Official subjects of bill: {subjects}
+Company name: {company_name}
+Company business description: {business_description}
+Is this bill potentially relevant to this company?
+Answer in this format:
+ANSWER: 'YES' or 'NO' (use all caps).
+EXPLANATION: the step-by-step reasoning you undertook to formulate a
+response.
+CONFIDENCE: integer between 0 and 100 for your estimate of confidence in
+your answer (1 is low confidence and 99 is high)
+Always guessing that a bill is not relevant to a company leads to
+accuracy of 70.9% on our dataset where the labels are whether a given
+company is relevant to a given proposed U.S. federal Congressional bill.
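Because the bill and company fields are embedded into the prompt programmatically and the reply follows a fixed ANSWER/EXPLANATION/CONFIDENCE format, both directions can be handled with a few lines of code. A minimal sketch (the function names and regular expressions are our own illustration, not part of any code released with the paper):

```python
import re

# Template mirroring the paper's prompt; placeholder names match the
# paper's field names.
PROMPT_TEMPLATE = (
    "You are a lobbyist analyzing Congressional bills for their potential "
    "impacts on companies.\n"
    "Official title of bill: {official_title}\n"
    "Official summary of bill: {summary_text}\n"
    "Official subjects of bill: {subjects}\n"
    "Company name: {company_name}\n"
    "Company business description: {business_description}\n"
    "Is this bill potentially relevant to this company?\n"
    "Answer in this format:\n"
    "ANSWER: 'YES' or 'NO' (use all caps).\n"
    "EXPLANATION: the step-by-step reasoning you undertook to formulate a "
    "response.\n"
    "CONFIDENCE: integer between 0 and 100.\n"
)

def build_prompt(fields: dict) -> str:
    """Embed the bill/company fields into the prompt programmatically."""
    return PROMPT_TEMPLATE.format(**fields)

def parse_reply(reply: str):
    """Extract the three structured fields from the model's reply."""
    answer = re.search(r"ANSWER:\s*'?(YES|NO)'?", reply).group(1)
    explanation = re.search(
        r"EXPLANATION:\s*(.+?)\s*CONFIDENCE:", reply, re.S).group(1)
    confidence = int(re.search(r"CONFIDENCE:\s*(\d+)", reply).group(1))
    return answer, explanation, confidence
```

Applied to the example output shown later in the paper, `parse_reply` would yield the tuple `("YES", <explanation text>, 95)`.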
+GPT-3.5 (text-davinci-003) obtains an accuracy of 75.3% (n = 485) and
+the immediately preceding state-of-the-art GPT-3 release
+(text-davinci-002) obtains an accuracy of 51.8% (n = 485).
+text-davinci-002 was state-of-the-art on most natural language related
+benchmark tasks6 until text-davinci-003 was released on November 28,
+2022. These results suggest that, as models improve, performance on
+corporate lobbying related tasks could continue to rapidly improve.
+The model is relatively well calibrated in its confidence assessments.
+The accuracy of text-davinci-003 on predictions where its confidence was
+greater than "90" increases to 78.7% (n = 432; it only eliminates a
+small number of predictions). The accuracy of text-davinci-002 on
+predictions where its confidence was greater than "90" increases to 75%
+(n = 44), but that eliminates most of the predictions, rendering the
+overall output close to useless.
+6 See, Percy Liang et al., Holistic Evaluation of Language Models, arXiv
+preprint (2022).
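The calibration comparison above — keep only predictions whose self-reported confidence exceeds a threshold, then recompute accuracy and count how many predictions survive — is simple to reproduce. A toy sketch with hypothetical records (the paper's actual dataset is not reproduced here):

```python
def accuracy_above(preds, threshold):
    """Accuracy over predictions whose self-reported confidence exceeds
    `threshold`, plus the number of predictions surviving the filter."""
    kept = [(guess, label) for guess, label, conf in preds if conf > threshold]
    if not kept:
        return None, 0
    correct = sum(1 for guess, label in kept if guess == label)
    return correct / len(kept), len(kept)

# Hypothetical (model answer, true label, confidence) records,
# NOT the paper's data.
records = [
    ("YES", "YES", 95),
    ("NO",  "NO",  92),
    ("YES", "NO",  95),
    ("NO",  "YES", 80),  # low-confidence miss, dropped at threshold 90
]
acc_all, n_all = accuracy_above(records, 0)   # 2/4 correct, n = 4
acc_hi, n_hi = accuracy_above(records, 90)    # 2/3 correct, n = 3
```

The interesting quantity is the pair: filtering helps only if accuracy rises while n stays large enough to be useful, which is exactly the difference the paper observes between text-davinci-003 and text-davinci-002.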
+In addition to predicting relevance and a confidence score, the prompt
+also elicits an explanation of its relevance decision. Here is an
+example of the model output:
+ANSWER: YES.
+EXPLANATION: Alkermes Plc develops and commercializes products designed
+to address the unmet needs of patients suffering from addiction and
+schizophrenia, which are both addressed in the bill. Additionally, the
+bill requires the Centers for Medicare & Medicaid Services (CMS) to
+negotiate with pharmaceutical companies regarding prices for drugs
+covered under the Medicare prescription drug benefit, which could
+potentially impact Alkermes Plc.
+CONFIDENCE: 95
+Finally, for all bills that are deemed relevant, we prompt the model to
+write a letter on behalf of the company. Here is an example letter:
+Dear [Sponsor of the Bill],
+I am writing on behalf of Alkermes Plc, a fully integrated, global
+biopharmaceutical company that applies its scientific expertise and
+proprietary technologies to research, develop and commercialize
+pharmaceutical products that are designed to address unmet medical needs
+of patients in major therapeutic areas. We are writing to express our
+support for the Medicare Negotiation and Competitive Licensing Act of
+2019. We believe that this bill is an important step in ensuring that
+Medicare beneficiaries have access to the medications they need at a
+price they can afford.
+We are particularly supportive of the provisions in the bill that would
+require the Centers for Medicare & Medicaid Services (CMS) to negotiate
+with pharmaceutical companies regarding prices for drugs covered under
+the Medicare prescription drug benefit. We believe that this will help
+to ensure that the prices of these drugs are fair and reasonable. We are
+also supportive of the provisions in the bill that would allow for
+competitive licensing of drugs if the CMS is unable to negotiate the
+price of a drug. This will help to ensure that the prices of these drugs
+are kept in check and that Medicare beneficiaries have access to the
+medications they need.
+At Alkermes, we develop and commercialize products designed to address
+the unmet needs of patients suffering from addiction and schizophrenia.
+We have two key marketed products, ARISTADA and VIVITROL, which are used
+to treat these conditions. We believe that the provisions in the bill
+will help to ensure that our products are available to Medicare
+beneficiaries at a price they can afford.
+We would like to suggest that the bill be amended to include provisions
+that would provide additional incentives for pharmaceutical companies to
+negotiate with the CMS. We believe that this would help to ensure that
+the prices of drugs are kept in check and that Medicare beneficiaries
+have access to the medications they need.
+We thank you for your consideration and look forward to working with you
+to ensure that the Medicare Negotiation and Competitive Licensing Act of
+2019 is passed in its amended form.
+Sincerely,
+[General Counsel of Alkermes Plc]
+III. A PROBLEM FOR AI ALIGNMENT
+There are two potential upsides of this advancement in AI as lobbyist.
+First, it may reduce human time spent on rote tasks. Second, it may
+reduce the costs of lobbying-related activities in a way that makes them
+differentially more affordable to non-profit organizations and
+individual citizens relative to well-funded organizations, which could
+"democratize" some aspects of influence (arguably donations to campaigns
+are more influential than any natural-language-based task related to
+those discussed in this paper).
+There are many obvious potential downsides if AI systems develop
+instrumental power-seeking goals and use lobbying as a means to
+accomplish misaligned policies. The potential, non-obvious, downside we
+have focused on in this paper is that an extended lobbying capability
+may eventually enable AI systems to influence public policy toward
+outcomes that are not reflective of citizens' actual views. This does
+not imply the existence of a strongly goal-directed agentic AI. There
+may be a slow drift, or otherwise emergent phenomena. AI lobbying
+activities could, in an uncoordinated manner, nudge the discourse toward
+public policies that are unaligned with what traditional human-driven
+policy activities would have pursued.
+Regulation and legislation embed world knowledge and human values into
+rules and standards. Legislation expresses a significant amount of
+information about the values of citizens,7 "for example, by banning
+employment discrimination against LGBT workers, the legislature may
+communicate pervasive attitudes against such employment practices."8
+And, "the Endangered Species Act has a special salience as a symbol of a
+certain conception of the relationship between human beings and their
+environment, and emissions trading systems are frequently challenged
+because they are said to 'make a statement' that reflects an
+inappropriate valuation of the environment."9
+Legislation is currently largely reflective of citizen beliefs. The
+second-best source of citizen attitudes is arguably a poll, but polls
+are not available at the local level, are only conducted on mainstream
+issues, and the results are highly sensitive to their wording and
+sampling techniques. Legislation expresses higher fidelity, more
+comprehensive, and trustworthy information because the legislators "risk
+their jobs by defying public opinion or simply guessing wrong about it.
+We may think of legislation therefore as a handy aggregation of the
+polling data on which the legislators relied, weighted according to
+their expert opinion of each poll's reliability."10
+Legislation and associated agency rule-making also express a significant
+amount of information about the risk preferences and risk tradeoff views
+of citizens, "for example, by prohibiting the use of cell phones while
+driving, legislators may reveal their beliefs that this combination of
+activities seriously risks a traffic accident."11 All activities have
+some level of risk, and making society-wide tradeoffs about which
+activities are deemed to be "riskier" relative to the perceived benefits
+of the activity is ultimately a sociological process with no objectively
+correct ranking. The cultural process of prioritizing risks is reflected
+in legislation and its subsequent implementation in regulation crafted
+by domain experts.
+In these ways, law provides the information AI systems need for societal
+alignment. However, if AI significantly influences the law itself, the
+only known democratically legitimate societal-AI alignment process12
+would be disrupted.
+7 See, e.g., Cass R. Sunstein, Incommensurability and Valuation in Law,
+92 Mich. L. Rev. 779, 820-24 (1994); Richard H. Pildes & Cass R.
+Sunstein, Reinventing the Regulatory State, 62 U. Chi. L. Rev. 1, 66-71
+(1995); Cass R. Sunstein, On the Expressive Function of Law, Univ. of
+Penn. L. Rev., 144.5 (1996); Dhammika Dharmapala & Richard H. McAdams,
+The Condorcet Jury Theorem and the Expressive Function of Law: A Theory
+of Informative Law, American Law and Economics Review 5.1, 1 (2003).
+8 Richard H. McAdams, The Expressive Powers of Law, Harv. Univ. Press
+(2017) at 137 [Hereinafter McAdams, The Expressive Powers of Law].
+9 Cass R. Sunstein, On the Expressive Function of Law, Univ. of Penn. L.
+Rev., 144.5 (1996) at 2024.
+10 McAdams, The Expressive Powers of Law, at 146.
+11 McAdams, The Expressive Powers of Law, at 138.
+12 See, John Nay, Law Informs Code: A Legal Informatics Approach to
+Aligning Artificial Intelligence with Humans, Northwestern Journal of
+Technology and Intellectual Property, Volume 20, Forthcoming (2023).
+Available at SSRN: https://ssrn.com/abstract=4218031.
diff --git a/stE1T4oBgHgl3EQfjgQE/content/tmp_files/2301.03262v1.pdf.txt b/stE1T4oBgHgl3EQfjgQE/content/tmp_files/2301.03262v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ce23eb765a3bbe168794b7ddc1f3f0be842d2ab7
--- /dev/null
+++ b/stE1T4oBgHgl3EQfjgQE/content/tmp_files/2301.03262v1.pdf.txt
@@ -0,0 +1,954 @@
+Network Slicing via Transfer Learning aided
+Distributed Deep Reinforcement Learning
+Tianlun Hu∗‡, Qi Liao∗, Qiang Liu†, and Georg Carle‡
+∗Nokia Bell Labs, Stuttgart, Germany
+†University of Nebraska Lincoln, United States
+‡Technical University of Munich, Germany
+Email: ∗‡tianlun.hu@nokia.com, ∗qi.liao@nokia-bell-labs.com, †qiang.liu@unl.edu, ‡carle@net.in.tum.de
+Abstract—Deep reinforcement learning (DRL) has been increasingly
+employed to handle the dynamic and complex resource management in
+network slicing. The deployment of DRL policies in real networks,
+however, is complicated by heterogeneous cell conditions. In this paper,
+we propose a novel transfer learning (TL) aided multi-agent deep
+reinforcement learning (MADRL) approach with inter-agent similarity
+analysis for inter-cell inter-slice resource partitioning. First, we
+design a coordinated MADRL method with information sharing to
+intelligently partition resources among slices and manage inter-cell
+interference. Second, we propose an integrated TL method to transfer the
+learned DRL policies among different local agents to accelerate policy
+deployment. The method is composed of a new domain and task similarity
+measurement approach and a new knowledge transfer approach, which
+resolve the questions of from whom to transfer and how to transfer. We
+evaluate the proposed solution with extensive simulations in a
+system-level simulator and show that our approach outperforms
+state-of-the-art solutions in terms of performance, convergence speed,
+and sample efficiency. Moreover, by applying TL, we achieve an
+additional gain of over 27% relative to the coordinated MADRL approach
+without TL.
+I. INTRODUCTION
+Network slicing is the key technique in 5G and beyond which enables
+network operators to support a variety of emerging network services and
+applications, e.g., autonomous driving, metaverse, and machine learning.
+Virtual networks (a.k.a. network slices) are dynamically created on
+common network infrastructure, e.g., base stations, and are highly
+customized in different aspects to meet the diverse performance
+requirements of these applications and services. With ever-increasing
+network deployment, e.g., of small cells, the traffic of slices and
+inter-cell interference in radio access networks become more dynamic and
+complex. Conventional model-based solutions, e.g., linear programming or
+convex optimization, can hardly handle this ever-complicating resource
+management problem.
+Recent advances in machine learning, especially deep reinforcement
+learning (DRL) [1], [2], have shown a promising capability to deal with
+dynamic and high-dimensional networking problems. Machine learning
+techniques, as model-free approaches, learn from historical interactions
+with the network and require no prior knowledge, e.g., mathematical
+models. Several works formulated resource management problems as Markov
+decision processes (MDPs), which are then solved using DRL to derive a
+centralized policy with global observations of the network. As the
+network scale grows, the action and state spaces of the centralized
+problem increase exponentially, which challenges the convergence and
+sample efficiency of DRL. Multi-agent deep reinforcement learning
+(MADRL) [3], [4] has been exploited to address this issue by creating
+and training multiple cooperative DRL agents, where each DRL agent
+focuses on an individual site or cell. However, training all individual
+DRL agents from scratch can still be costly and time-consuming, e.g.,
+due to expensive queries to real networks and environments that appear
+unstable from the perspective of individual DRL agents.
+Recently, transfer learning (TL) [5] based methods have
+been increasingly studied to improve the sample efficiency
+and model reproducibility in the broad machine learning fields
+[6]–[8]. The basic idea of TL is to utilize prior knowledge
+from prelearned tasks to benefit the training process in new
+tasks. For example, the resource partitioning policy of a cell
+can be transferred to another cell when they share similar
+network settings, e.g., bandwidth, transmit power, and traffic
+pattern. Generally, there are several questions to be answered
+before using TL methods, i.e., what to transfer, from whom to
+transfer, and how to transfer. Existing TL methods mostly focus on supervised machine learning, e.g., computer vision and natural language processing [9], and provide limited insight into applying TL to DRL tasks [10]–[13]. Therefore, it is imperative to study how TL improves the performance of MADRL, in terms of sample efficiency and fine-tuning cost, for the inter-cell resource partitioning problem.
+In this paper, we propose a novel TL-aided MADRL approach with domain similarity analysis for inter-slice resource partitioning. First, we design a coordinated MADRL
+method for inter-cell resource partitioning problems in net-
+work slicing, where DRL agents share local information with
+each other to mitigate inter-cell interference. The objective
+of MADRL is to maximize the satisfaction level of per-
+slice service requirements in terms of average user throughput
+and delay in each cell. Second, we design an integrated TL
+method to transfer the learned DRL policies among different
+agents for accelerating the policy deployment, where the new
+method consists of two parts. On the one hand, we propose a
+feature-based inter-agent similarity analysis approach, which
+measures the domain and task difference by extracting rep-
+resentative feature distributions in latent space. On the other
+hand, we propose a new knowledge transfer approach with
+the combined model (policy) and instance transfer. The main
+contributions of this paper are summarized as follows:
+• We design a coordinated MADRL method for the inter-
+cell resource partitioning problem in network slicing.
+• We design a novel inter-agent similarity analysis ap-
+proach, based on the features extracted by variational
+auto-encoder (VAE) to evaluate both domain and task
+similarity between two reinforcement learning agents.
+• We design a new knowledge transfer approach that
+combines the model (policy) and instance transfer from
+arXiv:2301.03262v1 [cs.NI] 9 Jan 2023
+
+Figure 1: Dynamic multi-cell slicing resource partitioning
+the selected source agent to the target agent.
+• We evaluate the performance of the proposed solution
+with extensive simulations in a system-level simulator.
+The results show that, by applying TL, we achieve an additional gain of over 27% compared with the coordinated MADRL approach without TL. Moreover, the performance gain achieved by TL is more significant in the low-data regime.
+II. SYSTEM MODEL AND DEFINITIONS
+We consider a network consisting of a set of cells K :=
+{1, 2, . . . , K} and a set of slices N := {1, 2, . . . , N}. Each
+slice n ∈ N has predefined average user throughput and delay
+requirements, denoted by φ∗n and d∗n, respectively. The network system runs on discrete time slots t ∈ N0. As illustrated in
+Fig. 1, network operation and maintenance (O&M) adapts the
+inter-slice resource partitioning for all cells to provide per-
+slice resource budgets to each cell periodically. Then, within
+each cell, the radio access network (RAN) scheduler uses
+the provided resource budgets as constraints and performs
+resource scheduling and physical resource block (PRB) al-
+location. In this paper, we focus on the inter-cell inter-slice
+resource partitioning problem in network O&M.
+Considering the diverse slice requirements and dynamic
+network conditions, we model the multi-cell resource par-
+titioning system as a set of K distributed MDPs M :=
+{M1, ..., MK}, with Mk := {Sk, Ak, Pk(·), rk(·), γk} de-
+fined for each agent k ∈ K (with a slight abuse of notation,
+hereafter we use k for cell and agent interchangeably). Sk
+and Ak denote the state space and action space respectively.
+Pk(·) : Sk × Ak × Sk → [0, 1] is the transition probability
+over Sk and Ak for cell k. rk : Sk × Ak → R is defined
+as the reward function which evaluates the network service
+of all slices in cell k and γk denotes the discount factor for
+cumulative reward calculation.
+At each time step t, agent k collects state sk(t) ∈ Sk
+and decides an action ak(t) ∈ Ak according to policy
+πk : Sk → Ak, which indicates the per-slice resource partitioning ratio ak,n ∈ [0, 1] for n ∈ N while aligning with the inter-slice resource constraints. Thus, the local action space Ak yields
+
+Ak := { ak | ak,n ∈ [0, 1], ∀n ∈ N; Σ_{n=1}^{N} ak,n = 1 }.    (1)
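+For illustration (this sketch is ours, not from the paper's implementation), a raw non-negative per-slice score vector can be normalized into a feasible action of (1); the helper names are hypothetical:

```python
def to_partition(scores):
    """Map non-negative per-slice scores to partitioning ratios in A_k:
    each ratio lies in [0, 1] and the ratios sum to 1 (Eq. (1))."""
    total = sum(scores)
    if total == 0:
        # no slice expresses demand: fall back to an equal split
        return [1.0 / len(scores)] * len(scores)
    return [s / total for s in scores]

def is_feasible(action, tol=1e-9):
    """Check membership in the action space of Eq. (1)."""
    return (all(0.0 <= a <= 1.0 for a in action)
            and abs(sum(action) - 1.0) < tol)

action = to_partition([3.0, 1.0, 1.0, 0.0])  # four slices, N = 4
assert is_feasible(action)
```

+Any policy output can be passed through such a normalization step to guarantee the simplex constraint of (1).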
+For each cell k ∈ K, our objective is to maximize the
+minimum service satisfaction level in terms of average user
+throughput and delay (φ∗n, d∗n) over all slices. Thus, for each
+agent k, we define the local reward function based on the
+observed per-slice average user throughput φk,n(t) and delay
+dk,n(t) at time t as
+rk(t) := min_{n∈N} min( φk,n(t)/φ∗k,n , d∗k,n/dk,n(t) , 1 ).    (2)
+The reward drops below 1 when the actual average throughput or delay of any slice fails to fulfill the requirements. Note that the reward is upper bounded by 1 even if all slices perform better than required, which encourages more efficient resource utilization. The second term in (2) is inversely proportional to the actual delay: if the delay is longer than required, this term falls below 1.
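+As a concrete illustration of (2) (a hedged sketch of ours, not the simulator's code), the reward can be computed directly from per-slice measurements:

```python
def reward(throughput, delay, thr_req, delay_req):
    """Eq. (2): worst-case (over slices) satisfaction level, capped at 1.
    All arguments are per-slice lists for one cell at one time step."""
    return min(
        min(phi / phi_star, d_star / d, 1.0)
        for phi, d, phi_star, d_star in zip(throughput, delay, thr_req, delay_req)
    )

# slice 1 meets its targets; slice 2 reaches only half its throughput target
r = reward([4.0, 1.5], [1.0, 2.0], [4.0, 3.0], [2.0, 2.0])  # r == 0.5
```

+The min over slices makes the reward sensitive to the single worst-served slice, matching the max-min objective stated above.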
+III. PROBLEM FORMULATION
+The Reinforcement Learning Problem: The problem is
+to find a policy πk : Sk → Ak for each k ∈ K that dynamically predicts the optimal inter-slice resource partitioning ak(t) ∈ Ak based on the local state sk(t) ∈ Sk, so as to maximize the expected cumulative discounted reward rk(t) defined in (2) over a finite time horizon T. The problem is given by:
+
+max_{πk; ak(t)∈Ak} E_{πk} [ Σ_{t=0}^{T} γk^t rk( sk(t), ak(t) ) ],  ∀k ∈ K,    (3)
+
+where Ak is defined in (1).
+In our previous work [14], we proposed a coordinated
+multi-agent DRL approach to transform an MADRL problem
+to the distributed DRL problem similar to (3), where the ex-
+tracted information from neighboring cells is included into the
+state observation to better capture the inter-agent dependency.
+However, training all local agents in parallel from scratch can
+be costly and time-consuming. Moreover, the trained models
+are sensitive to environment changes and the retraining cost
+can be high.
+Thus, in this paper, we raise the following new questions:
+Can we reuse the knowledge in a pretrained model? When
+is the knowledge transferable? And, most importantly, how do we transfer the gained knowledge from one agent to another?
+The Transfer Learning Problem: To tackle the transfer
+learning problem, let us first introduce two definitions, domain and task, in the context of reinforcement learning.
+A domain D := {S, P(s)} consists of a state feature space
+S and its probability distribution P(s), for s ∈ S. A task
+T := {A, π(·)} consists of the action space A and a policy
+function π : S → A.
+Thus, our inter-agent transfer learning problem is to find
+the optimal source agent among a set of pretrained agents,
+and transfer its knowledge (pretrained model and collected
+instances) to the target agent, such that problem (3) can be
+solved in the target agent with fast convergence and limited
+amount of samples. In particular, the problem is defined in
+Problem 1.
+Problem 1. Given a set of pretrained source agents K̂ ⊂ K with source domains D(S) := {Di(S) : i ∈ K̂} and pretrained tasks T(S) := {Ti(S) : i ∈ K̂}, and given any target agent k ∉ K̂ with target domain Dk(T) and untrained task Tk(T), find the optimal source agent ik∗ ∈ K̂ for target agent k to transfer knowledge from, such that
+
+ik∗ := arg max_{πk | πk(0) = Λ(πi(S)); i ∈ K̂} E_{πk} [ Σ_{t=0}^{T} γk^t rk( sk(t), ak(t) ) ]    (4)
+s.t. (sk, ak) ∈ Γ( Di(S), Dk(T), Ai(S), Ak(T) ),
+where Λ(πi(S)) is the policy transfer strategy which maps a pretrained source policy πi(S) to the initial target policy πk(0), while Γ(Di(S), Dk(T), Ai(S), Ak(T)) is the instance transfer strategy which selects instances from the source agent, combines them with the instances experienced by the target agent, and saves them in the replay buffer for model training or fine-tuning in the target agent. More details about the transfer learning strategies are given in Section IV-C.
+IV. PROPOSED SOLUTIONS
+In this section, we first present a distributed MADRL
+approach to solve the slicing resource partitioning problem
+in (3). Then, to solve problem (4) and find the optimal source agent, we propose a novel approach to inter-agent similarity analysis based on features extracted with a VAE. Finally, for inter-agent transfer learning, we introduce a transfer learning strategy which combines the model (policy) transfer and the instance transfer.
+A. Coordinated MADRL Approach
+As stated in (3), the distributed DRL approach allows each agent to learn a local policy and make its own decision on inter-slice resource partitioning based on local observation. Compared with centralized DRL approaches, distributed approaches reduce the state and action spaces and significantly accelerate the training progress. However, local observation alone cannot capture the inter-cell dependencies and provide sufficient information to achieve the globally optimal solution. Thus, we proposed in [14] a distributed DRL approach with inter-agent coordination, which keeps the model complexity low while including extracted information from neighboring cells to capture the inter-cell interference. We briefly summarize the coordinated distributed DRL approach below, since this paper focuses on the inter-agent transfer learning as its main contribution; for more details, readers are referred to our previous work [14].
+Each local agent k observes a local state s′k, which contains the following network measurements:
+• Per-slice average user throughput {φk,n : n ∈ N};
+• Per-slice network load {lk,n : n ∈ N};
+• Per-slice number of users {uk,n : n ∈ N}.
+Thus, with the three slice-specific features defined above, the local state s′k has dimension 3N. Additionally, to better capture the inter-cell dependencies and estimate the global network performance, we introduce an inter-agent coordination mechanism through network information sharing among agents. Each agent k broadcasts a message mk to its neighboring group of agents, denoted by Kk, so that each agent k receives a collection of messages mk := [mi : i ∈ Kk] ∈ R^{Z(m)}. Instead of using all received messages in mk, we propose to extract useful information ck ∈ R^{Z(c)} to keep the model complexity low. We aim to find a feature extractor g : R^{Z(m)} → R^{Z(c)} : mk ↦ ck, such that Z(c) ≪ Z(m). Then, we include the extracted features from the shared messages into the local state: sk := [s′k, ck].
+Knowing that the inter-agent dependencies are mainly caused by inter-cell interference due to cell load coupling [15], we propose to let each cell k share its per-slice load lk,n, ∀n ∈ N, with its neighboring cells. Then, we compute the extracted information ck as the average per-slice neighboring load. Namely, we define a deterministic feature extractor, given by:
+
+gk : R^{N|Kk|} → R^N : [li,n : n ∈ N, i ∈ Kk] ↦ ck(t),
+with ck(t) := [ (1/|Kk|) Σ_{i∈Kk} li,n(t) : n ∈ N ].    (5)
+
+Figure 2: Variational autoencoder
+With the extended local state including the inter-agent
+shared information, we can use classical DRL approaches, e.g., actor-critic algorithms such as Twin Delayed Deep Deterministic policy gradient (TD3) [16], to solve (3).
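+The deterministic extractor gk of (5) is simply a per-slice average over neighboring loads; a minimal illustrative sketch (the function name is ours):

```python
def extract_features(neighbor_loads):
    """g_k of Eq. (5): neighbor_loads is a list over cells i in K_k,
    each entry a per-slice load vector [l_{i,n} : n in N].
    Returns c_k, the average per-slice neighboring load."""
    num_neighbors = len(neighbor_loads)
    num_slices = len(neighbor_loads[0])
    return [sum(cell[n] for cell in neighbor_loads) / num_neighbors
            for n in range(num_slices)]

c_k = extract_features([[0.2, 0.8], [0.4, 0.4]])  # two neighbors, two slices
```

+Note that the output dimension equals N regardless of the neighborhood size |Kk|, which is what keeps the model complexity low.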
+B. Integrated TL with Similarity Analysis
+The distributed DRL approach introduced in Section IV-A
+allows us to derive a set of pretrained local agents. Still, given a target cell k, e.g., a newly deployed cell or an existing cell whose environment has changed, more questions need to be answered: Can we transfer the prelearned knowledge from at least one of the pretrained agents? Which source cell provides the most transferable information? How should the knowledge be transferred?
+To solve the transfer learning problem in (4), we develop
+a distance measure Di,k to quantify the inter-agent similarity
+between a source agent i and a target agent k. We aim to
+transfer the knowledge from the source agent with the highest
+similarity (reflected by the lowest distance measure).
+The ideal approach to analyze the domain and task similar-
+ity between two agents is to obtain their probability distribu-
+tions of the state P(s) and derive the conditional probability
+distribution P(a|s). However, the major challenge here lies
+in the limited samples in the target agent. Considering that
+the target agent is a newly deployed agent, there is no
+information available about its policy P(a|s), and P(s) is very
+biased, because all samples are collected under the default
+configurations (i.e., constant actions).
+Thus, we need to design a distance measure that works with very limited and biased samples from the target agent, without any information about its policy P(a|s). Our idea is to derive and compare the joint state and reward distribution under the same default action a′, P(s, r|a = a′), in both the source and the target agent. The rationale behind this is that, when applying
+the actor-critic-based DRL architecture, the critic function
+estimates the Q value Qπ(a, s) based on action and state.
+Hence, the conditional probability P(r|s, a) should provide
+useful information of the policy. With a = a′, we can consider
+to estimate P(r|s, a = a′). To efficiently capture the informa-
+tion for both domain similarity (based on P(s|a = a′)) and
+task/policy similarity (based on P(r|s, a = a′)), we propose
+to estimate the joint probability P(s, r|a = a′) = P(r|s, a =
+a′)P(s|a = a′).
+Sample collection: To estimate the distance between P(s, r|a = a′) of the source and target agents, we use all available samples from the target agent k under the default action a′, Xk = {(sk(n), rk(n))|ak(n)=a′ : n = 1, . . . , Nk}, and select a subset of the samples from the source agent i with the same default action, Xi = {(si(n), ri(n))|ai(n)=a′ : n = 1, . . . , Ni}. Note that in this subsection we slightly abuse the notation by using n as the index of samples and Nk as the number of samples with the default action collected from agent k.
+Feature extraction with VAE: To extract representative features from the high-dimensional vector [s, r], we propose to apply a VAE [17] to map the samples into a low-dimensional latent space. As Fig. 2 illustrates, for each sample x := [s, r] ∈ X, the encoder of the VAE estimates an approximated distribution P(z) in the latent space Z as a multivariate Gaussian distribution N(µ, diag(σ)), where diag denotes the diagonal matrix. The decoder samples a latent variable z ∈ Z from the approximated distribution z ∼ N(µ, diag(σ)) and outputs a reconstructed sample x̂ by training on the following loss function:
+
+L := ∥x − x̂∥² + α · DKL( N(µ, diag(σ)) ∥ N(0, diag(1)) ),    (6)
+
+where α is the weight factor and DKL denotes the Kullback–Leibler (KL) divergence.
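+A direct transcription of (6) as a sketch (ours, not the paper's code; we assume σ holds the per-dimension variances of the diagonal Gaussian, which the text leaves implicit, and use the closed-form KL for diagonal Gaussians against the standard normal prior):

```python
import math

def vae_loss(x, x_hat, mu, sigma, alpha=1.0):
    """Eq. (6): squared reconstruction error plus alpha times the KL
    divergence from N(mu, diag(sigma)) to the prior N(0, I)."""
    recon = sum((a - b) ** 2 for a, b in zip(x, x_hat))
    kl = 0.5 * sum(s + m * m - 1.0 - math.log(s)
                   for m, s in zip(mu, sigma))
    return recon + alpha * kl

# perfect reconstruction with the posterior equal to the prior gives zero loss
assert vae_loss([1.0, 2.0], [1.0, 2.0], [0.0, 0.0], [1.0, 1.0]) == 0.0
```

+In practice the encoder and decoder would be neural networks; this snippet only pins down the objective they are trained on.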
+Inter-agent similarity analysis: Since the VAE does not directly provide the probability distribution function P(x), we propose to utilize the extracted features in the latent space to evaluate the inter-agent similarity. Considering the limited amount of samples (only those under the default action), we propose to train a general VAE model on the samples from all candidate source agents and the target agent, i.e., X = ∪_j Xj over all pretrained source agents j and the target agent k. The idea is to extract the latent features from the samples of all relevant agents with a general encoder and to distinguish the agents within a common latent space.
+Thus, for each sample xn ∈ X, we can derive its extracted features, i.e., the posterior distribution P(zn|xn) = N(µn, diag(σn)). We denote the extracted latent space for agent k by Zk. Next, we can measure the inter-agent distance between an arbitrary source agent i and the target agent k by calculating the KL divergence based on the extracted latent variables from their collected samples:
+
+Di,k := (1/(Ni Nk)) · Σ_{(µn,σn)∈Zi; (µm,σm)∈Zk} DKL( N(µn, diag(σn)) ∥ N(µm, diag(σm)) ).    (7)
+This requires computing the KL divergence for every pair of samples (n, m) with n ∈ Xi and m ∈ Xk, which can be computationally intensive.
+Since both are Gaussian distributions, each term can be computed efficiently with a closed-form expression (as will be shown in (8)). Besides, in our experiments we observed that σn → 0 for nearly all collected samples xn ∈ X, i.e., their variances are extremely small (below 10⁻⁵ in our observations). Thus, for our problem, we can evaluate the distance measure more efficiently based on the following lemma.
+Lemma 1. Given two multivariate Gaussian distributions p = N(µn, Σn) and q = N(µm, Σm), where µn, µm ∈ R^L and Σn = Σm = σ²I ∈ R^{L×L} for a small positive constant σ ≪ 1, the KL divergence DKL(p∥q) is proportional to Σ_{l=1}^{L} (µn,l − µm,l)².
+Proof. It is easy to derive that
+
+DKL(p∥q) = (1/2) [ log(|Σm|/|Σn|) − L + (µn − µm)ᵀ Σm⁻¹ (µn − µm) + Tr(Σm⁻¹ Σn) ].    (8)
+
+Because Σn = Σm = diag([σ², . . . , σ²]), the first term in (8) equals 0 and the last term equals L. Thus, we obtain
+
+DKL(p∥q) = (1/(2σ²)) Σ_{l=1}^{L} (µn,l − µm,l)².    (9)
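+The reduction from (8) to (9) can be checked numerically; the following sketch (ours, restricted to isotropic covariances Σ = var·I) compares the full closed form with the simplified expression:

```python
import math

def kl_full(mu_n, mu_m, var_n, var_m):
    """Eq. (8) for isotropic covariances Sigma_n = var_n * I, Sigma_m = var_m * I."""
    L = len(mu_n)
    log_det = L * (math.log(var_m) - math.log(var_n))
    quad = sum((a - b) ** 2 for a, b in zip(mu_n, mu_m)) / var_m
    trace = L * var_n / var_m
    return 0.5 * (log_det - L + quad + trace)

def kl_simplified(mu_n, mu_m, var):
    """Eq. (9): valid when both covariances equal var * I."""
    return sum((a - b) ** 2 for a, b in zip(mu_n, mu_m)) / (2.0 * var)

mu_n, mu_m, var = [0.1, -0.3, 0.7], [0.2, 0.0, 0.5], 1e-5
assert abs(kl_full(mu_n, mu_m, var, var) - kl_simplified(mu_n, mu_m, var)) < 1e-6
```

+With equal covariances the log-determinant term vanishes and the trace term cancels the −L, leaving only the quadratic term, exactly as in the proof.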
+With Lemma 1, we can measure the distance between two agents more efficiently, based on the extracted means µn and µm in the source and target latent spaces. Thus, to solve Problem 1, we propose to choose the source agent
+
+ik∗ := arg min_i Di,k,    (10)
+
+where the minimum is taken over the set of pretrained source agents and Di,k is computed based on (7) and (9).
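+Putting (7), (9), and (10) together, source selection reduces to an argmin over average pairwise squared distances between latent means; a hedged sketch of ours, with illustrative data:

```python
def distance(Z_i, Z_k, var=1e-5):
    """Eq. (7) specialized by Lemma 1 / Eq. (9): Z_i and Z_k are lists of
    latent mean vectors; the common small variance `var` only scales the
    measure and does not change the argmin."""
    total = sum(sum((a - b) ** 2 for a, b in zip(mu_n, mu_m))
                for mu_n in Z_i for mu_m in Z_k)
    return total / (2.0 * var * len(Z_i) * len(Z_k))

def select_source(source_latents, Z_target):
    """Eq. (10): pick the pretrained agent with minimum distance to the target."""
    return min(source_latents,
               key=lambda i: distance(source_latents[i], Z_target))

# agent 1's latent means lie near the target's; agent 2's are far away
sources = {1: [[0.0, 0.0], [0.1, 0.0]], 2: [[5.0, 5.0], [5.1, 4.9]]}
assert select_source(sources, [[0.05, 0.02]]) == 1
```

+Since `var` is a common positive scale, the selected agent is the same for any choice of `var`.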
+C. Integrated Transfer Learning Approach
+In general, the prelearned knowledge can be transferred from a source agent i to the target agent k with various policy transfer strategies Λ(·) and instance transfer strategies Γ(·):
+• Model transfer: The policy transfer strategy Λ(·) simply initializes the target agent’s policy πk(0) by loading the parameters (e.g., the weights of the pretrained neural networks) of the pretrained policy πi(S) from the source agent i.
+• Feature transfer: The policy transfer strategy Λ(·) keeps partial information extracted from the source agent’s pretrained policy πi(S). In particular, the target agent loads some of the layers (usually the lower layers) of the pretrained neural networks of πi(S), while the remaining layers are randomly initialized. Then, during training, the loaded layers are frozen and only the randomly initialized layers are fine-tuned with the instances newly collected by the target agent.
+• Instance transfer: The instance transfer strategy Γ(·)
+transfers the collected instances from the source agent i
+to the target agent k and saves them in the target agent’s
+replay buffer. Then, the target agent trains a policy from
+scratch with randomly initialized parameters and mixed
+instances collected from both source and target agents.
+The above-mentioned knowledge from the source domain and source task can be transferred separately or in a combined manner. In this paper, we propose an integrated transfer method with both model and instance transfer. Specifically, the target agent k initializes its local policy πk(0) by loading the pretrained policy πi(S) of the source agent and fine-tunes the policy by sampling from a replay buffer containing both types of instances: those transferred from the source agent and those experienced locally. Here, we skip the feature transfer because it performs well in practice only when the similarity between the source and target domain/task is very high. Although this assumption may hold for some regression and classification tasks, we empirically find that it fails in this MADRL context.
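+The combined model and instance transfer can be sketched as follows (illustrative only; the parameter dictionary and uniform sampling of source instances are our stand-ins, not the paper's exact procedure):

```python
import copy
import random

def integrated_transfer(source_params, source_buffer, target_buffer, n_source):
    """Model transfer: copy the source policy parameters as the target's
    initialization (pi_k^(0) = Lambda(pi_i^(S))). Instance transfer: prefill
    the target replay buffer with n_source sampled source instances plus
    all locally collected instances (Gamma)."""
    init_params = copy.deepcopy(source_params)
    picked = random.sample(source_buffer, min(n_source, len(source_buffer)))
    replay = picked + list(target_buffer)  # mixed replay buffer
    return init_params, replay

params, replay = integrated_transfer({"w": [0.1, 0.2]},
                                     [("s", "a", 0.9)] * 10,
                                     [("t", "b", 0.7)] * 2,
                                     n_source=5)
assert params == {"w": [0.1, 0.2]} and len(replay) == 7
```

+Fine-tuning then proceeds as ordinary off-policy training (e.g., TD3) on the mixed buffer, starting from the copied parameters.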
+V. PERFORMANCE EVALUATION
+In this section, we evaluate the performance of the proposed solution within a system-level simulator [18]. The simulator achieves great accuracy in imitating real network systems, with configurable user mobility, network slicing traffic, and topology. In addition, we introduce a traffic-aware baseline which allocates resources proportionally to the data traffic demand per slice. Note that the baseline assumes perfect information about per-cell per-slice traffic demands and therefore already provides very good results.
+
+Figure 3: Traffic mask to imitate the time-varying network traffic
+Figure 4: Comparing reward during the training process
+Figure 5: Comparing CDF of minimum slice throughput satisfaction
+1) Network settings: We build a radio access network with 4 three-sector sites (i.e., K = 12 cells). All cells are deployed using LTE radio technology at 2.6 GHz under a realistic radio propagation model, Winner+ [19]. Each cell has N = 4 slices with diverse per-slice requirements in terms of average user throughput and delay. In the cells with labels 1, 2, 3, 7, 8, 9, we define per-slice average throughput requirements of φ∗1 = 4 MBit/s, φ∗2 = 3 MBit/s, φ∗3 = 2 MBit/s, and φ∗4 = 1 MBit/s, and per-slice delay requirements of d∗1 = 3 ms, d∗2 = 2 ms, and d∗3 = d∗4 = 1 ms. In the cells with labels 4, 5, 6, 10, 11, 12, we define per-slice throughput requirements of φ∗1 = 2.5 MBit/s, φ∗2 = 2 MBit/s, φ∗3 = 1.5 MBit/s, and φ∗4 = 1 MBit/s, and delay requirements of d∗n = 1 ms, ∀n ∈ N. All cells have the same radio bandwidth of 20 MHz.
+We define four groups of user equipment (UE), each associated with one of the four slices in each cell; each UE group has a maximum size of 32 and moves randomly within the defined network scenario. To mimic the dynamic behavior of real user traffic, we apply a time-varying traffic mask τn(t) ∈ [0, 1] to each slice to scale the total number of UEs in each cell; Fig. 3 shows the traffic mask for the first 200 steps.
+2) DRL training configuration: For MADRL training, we implemented the TD3 algorithm at each local agent using a multi-layer perceptron (MLP) architecture for the actor-critic networks. In each TD3 model, the actor and critic networks consist of two layers with (48, 24) and (64, 24) neurons, respectively. The learning rates of the actor and critic are 0.0005 and 0.001, respectively, with the Adam optimizer and a training batch size of 32. We set the discount factor to γ = 0.1, since the current action has a stronger impact on instant network performance than future observations. For training the distributed DRL agents, we applied 3000 steps of exploration, 5500 steps of training, and a final 250 steps of evaluation. For the TL training process, we apply the same model setups as the DRL approaches, but with only 4000 steps of training and 250 of evaluation, since knowledge transfer saves the time needed for exploration.
+3) Comparing DRL to TL-aided approach: In Fig. 4 we compare the evolution of the reward during the training process for the baseline, the DRL approach (proposed in Section IV-A), and the TL approaches when transferring from source agents with low and high similarity (proposed in Sections IV-B and IV-C), respectively. For DRL, we present the first 4000 steps, i.e., the same training time as the TL approaches, with a solid line, and the rest of the training curve with a dashed line.
+As shown in Fig. 4, the distributed DRL approach learns to achieve a reward similar to the baseline after a lengthy exploration phase, while both TL approaches start at a much higher reward than DRL. After a short fine-tuning period, the TL approaches outperform the baseline with higher robustness, especially during periods with higher traffic demand and strong inter-cell interference, where the baseline suffers sharp performance degradation. Besides, comparing TL from agents with different similarity measures, we observe that with higher similarity, TL provides a higher start at the early stage of training, while both converge to similar performance after training converges.
+For performance evaluation, we compare the statistical results on the minimum per-slice throughput satisfaction level and the maximum per-slice delay over all cells among the baseline, the distributed DRL, and the proposed TL approach after convergence. Fig. 5 illustrates the empirical complementary cumulative distribution function, i.e., 1 − FX(x), where FX(x) is the CDF of the minimum per-slice throughput satisfaction level. We observe that the TL approach provides the best performance: only about 12% of the samples fail to satisfy 0.95 of the requirement, while the converged DRL and the baseline show 19% and 25% failure rates, respectively. In terms of average satisfaction level, the TL approach achieves 0.92, while DRL and the baseline only provide 0.90 and 0.87. A similar observation can be made from Fig. 6, which illustrates the CDF of the maximum slice delay in ms. The TL approach provides 1.5 ms maximum average per-slice delay, while DRL achieves 1.7 ms and the baseline 1.8 ms.
+4) Inter-agent similarity analysis: We implemented the similarity analysis method introduced in Section IV-B with a VAE model in an MLP architecture; the encoder and decoder networks consist of 3 layers with (64, 24, 4) and (4, 24, 64) neurons, respectively. To achieve a good trade-off between a low-dimensional latent space and accurate reconstruction with the VAE, we map the original sample x ∈ R¹⁷ to the latent variable z ∈ R⁴.
+Fig. 7 illustrates the results of the inter-agent similarity analysis in terms of the distance measure proposed in (7). It shows that our proposed method can distinguish cells with different per-slice service quality requirements and groups together the cells with similar joint state–reward distributions.
+5) Dependence of TL performance on distance measure:
+In Fig. 8 we compare the benefits of TL in the training process by transferring knowledge from source agents with different average inter-agent distance measures. The TL gains are derived by comparing the reward with that of the DRL approach at the same training step. The results show that before 200 steps of TL training, the TL approach with the lowest distance measure provides about 3% higher gain than the one with the largest distance. As the training process continues, the gains of all TL approaches increase with local fine-tuning, and the difference between transferring from highly similar and less similar agents becomes smaller. However, TL from the most similar agent provides higher gains at all training steps.
+
+Figure 6: Comparing CDF of maximum slice delay
+Figure 7: Inter-agent distance measure
+Figure 8: TL performance gain depending on distance measure
+6) Key Takeaways: We summarize the takeaways from the numerical results as follows:
+• All distributed DRL-based approaches achieve better per-slice network service than the traffic-aware baseline after convergence. However, the TL schemes outperform the conventional DRL approach in terms of convergence rate, initial performance, and converged performance.
+• Our proposed VAE-based similarity measure quantifies the distance between agents well and can be used to suggest a mapping from the defined distance measure to the transfer learning performance gain.
+• The difference between the gains achieved by TL from highly similar and less similar agents is more significant when the number of training steps is low (i.e., with limited online training samples). Although the advantage of transferring from a highly similar agent over a less similar one decreases as the number of online training steps increases, a slight performance gain is always achieved by transferring knowledge from the most similar source agent.
+VI. CONCLUSION
+In this paper, we formulated the dynamic inter-slice re-
+source partitioning problem to optimize the network require-
+ment satisfaction level of all slices in each cell. To tackle the
+inter-cell interference, we proposed a coordinated MADRL
+method with the coordination scheme of information sharing.
+We proposed a novel integrated TL method to transfer the
+learned DRL policies among different local agents for accel-
+erating the policy deployment. The method is accomplished
+by a new inter-agent similarity measurement approach and
+a new knowledge transfer approach. We evaluated the proposed solution with extensive simulations in a system-level simulator; the results show that our approach outperforms conventional DRL solutions.
+ACKNOWLEDGMENT
+This work was supported by the German Federal Min-
+istry of Education and Research (BMBF) project KICK
+[16KIS1102K].
+REFERENCES
+[1] Y. Liu, J. Ding, and X. Liu, “A constrained reinforcement learning
+based approach for network slicing,” in 2020 IEEE 28th International
+Conference on Network Protocols (ICNP), 2020, pp. 1–6.
+[2] Q. Liu, T. Han, N. Zhang, and Y. Wang, “DeepSlicing: Deep reinforce-
+ment learning assisted resource allocation for network slicing,” in IEEE
+GLOBECOM, 2020, pp. 1–6.
+[3] N. Zhao, Y.-C. Liang, D. T. Niyato, Y. Pei, M. Wu, and Y. Jiang, “Deep
+reinforcement learning for user association and resource allocation
+in heterogeneous cellular networks,” IEEE Transactions on Wireless
+Communications, vol. 18, pp. 5141–5152, 2019.
+[4] Y. Shao, R. Li, Z. Zhao, and H. Zhang, “Graph attention network-based DRL for network slicing management in dense cellular networks,” IEEE WCNC, pp. 1–6, 2021.
+[5] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transac-
+tions on knowledge and data engineering, vol. 22, no. 10, pp. 1345–
+1359, 2009.
+[6] C. T. Nguyen et al., “Transfer learning for future wireless networks: A
+comprehensive survey,” arXiv preprint arXiv:2102.07572, 2021.
+[7] M. Wang, Y. Lin, Q. Tian, and G. Si, “Transfer learning promotes 6g
+wireless communications: recent advances and future challenges,” IEEE
+Transactions on Reliability, 2021.
+[8] C. Parera, Q. Liao et al., “Transfer learning for tilt-dependent radio
+map prediction,” IEEE Transactions on Cognitive Communications and
+Networking, vol. 6, no. 2, pp. 829–843, 2020.
+[9] F. Zhuang, Z. Qi, K. Duan, D. Xi, Y. Zhu, H. Zhu, H. Xiong, and
+Q. He, “A comprehensive survey on transfer learning,” Proceedings of
+the IEEE, vol. 109, no. 1, pp. 43–76, 2020.
+[10] M. E. Taylor and P. Stone, “Transfer learning for reinforcement learning
+domains: A survey,” J. Mach. Learn. Res., vol. 10, pp. 1633–1685, 2009.
+[11] Z. Zhu, K. Lin, and J. Zhou, “Transfer learning in deep reinforcement
+learning: A survey,” CoRR, vol. abs/2009.07888, 2020. [Online].
+Available: https://arxiv.org/abs/2009.07888
+[12] A. M. Nagib, H. Abou-zeid, and H. S. Hassanein, “Transfer learning-
+based accelerated deep reinforcement learning for 5G RAN slicing,”
+IEEE 46th LCN, pp. 249–256, 2021.
+[13] T. Mai, H. Yao et al., “Transfer reinforcement learning aided distributed
+network slicing resource optimization in industrial IoT,” IEEE Trans-
+actions on Industrial Informatics, 2021.
+[14] T. Hu, Q. Liao, Q. Liu, D. Wellington, and G. Carle, “Inter-cell slicing
+resource partitioning via coordinated multi-agent deep reinforcement
+learning,” in IEEE ICC, 2022.
+[15] R. L. G. Cavalcante, Q. Liao, and S. Stańczak, “Connections between
+spectral properties of asymptotic mappings and solutions to wireless
+network problems,” IEEE Transactions on Signal Processing, vol. 67,
+pp. 2747–2760, 2019.
+[16] S. Fujimoto et al., “Addressing function approximation error in Actor-
+Critic methods,” ArXiv, vol. abs/1802.09477, 2018.
+[17] K. Sohn, H. Lee, and X. Yan, “Learning structured output representation
+using deep conditional generative models,” in NIPS, 2015.
+[18] Nokia Siemens Networks, White paper: Self-organizing network (SON):
+Introducing the Nokia Siemens networks SON suite-an efficient, future-
+proof platform for SON.
+Technical report, October, 2009.
+[19] J. Meinil¨a, P. Ky¨osti, L. Hentil¨a, T. J¨ams¨a, E. Suikkanen, E. Kunnari,
+and M. Narandˇzi´c, Wireless World Initiative New Radio - Winner+.
+Technical report, 2010.
+
+[Figure: complementary CDF of max slice delay (in s), comparing TL, Baseline, and DRL]
+[Figure: TL gain (in %) vs. distance measure (0.003 to 0.117), evaluated after 100, 200, 500, 1000, and 2000 training steps]
\ No newline at end of file
diff --git a/stE1T4oBgHgl3EQfjgQE/content/tmp_files/load_file.txt b/stE1T4oBgHgl3EQfjgQE/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..5d94595f86ebe5196ed2bfbdfec1be7dd26b9e8d
--- /dev/null
+++ b/stE1T4oBgHgl3EQfjgQE/content/tmp_files/load_file.txt
@@ -0,0 +1,485 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf,len=484
+Network Slicing via Transfer Learning aided Distributed Deep Reinforcement Learning
+Tianlun Hu, Qi Liao, Qiang Liu, and Georg Carle
+Nokia Bell Labs, Stuttgart, Germany; University of Nebraska-Lincoln, United States; Technical University of Munich, Germany
+Email: tianlun.hu@nokia.com, qi.liao@nokia-bell-labs.com, qiang.liu@unl.edu, carle@net.in.tum.de
+Abstract: Deep reinforcement learning (DRL) has been increasingly employed to handle the dynamic and complex resource management in network slicing. The deployment of DRL policies in real networks, however, is complicated by heterogeneous cell conditions. In this paper, we propose a novel transfer learning (TL) aided multi-agent deep reinforcement learning (MADRL) approach with inter-agent similarity analysis for inter-cell inter-slice resource partitioning. First, we design a coordinated MADRL method with information sharing to intelligently partition resources among slices and manage inter-cell interference. Second, we propose an integrated TL method to transfer the learned DRL policies among different local agents to accelerate policy deployment. The method is composed of a new domain and task similarity measurement approach and a new knowledge transfer approach, which resolve the problems of from whom to transfer and how to transfer. We evaluate the proposed solution with extensive simulations in a system-level simulator and show that our approach outperforms state-of-the-art solutions in terms of performance, convergence speed, and sample efficiency. Moreover, by applying TL, we achieve an additional gain of over 27% compared with the coordinated MADRL approach without TL.
+I. INTRODUCTION
+Network slicing is a key technique in 5G and beyond that enables network operators to support a variety of emerging network services and applications, e.g., autonomous driving, metaverse, and machine learning. The virtual networks (aka network slices) are dynamically created on common network infrastructure, e.g., base stations, and are highly customized in different aspects to meet the diverse performance requirements of these applications and services. With ever-increasing network deployment, e.g., small cells, the traffic of slices and the inter-cell interference in radio access networks become more dynamic and complex. Conventional model-based solutions, e.g., linear programming or convex optimization, can hardly handle the ever-complicating resource management problem.
+Recent advances in machine learning, especially deep reinforcement learning (DRL) [1], [2], have shown a promising capability to deal with dynamic and high-dimensional networking problems. Machine learning techniques, as model-free approaches, learn from historical interactions with the network and require no prior knowledge, e.g., mathematical models. Several works formulate resource management problems as Markov decision processes (MDPs), which are then solved by using DRL to derive a centralized policy with global observations of the network. As the network scale grows, the action and state spaces of the centralized problem increase exponentially, which challenges the convergence and sample efficiency of DRL. Multi-agent deep reinforcement learning (MADRL) [3], [4] has been exploited to address this issue by creating and training multiple cooperative DRL agents, where each DRL agent focuses on an individual site or cell. However, training all individual DRL agents from scratch can still be costly and time-consuming, e.g., due to expensive queries to real networks and environments that are unstable from the perspective of individual DRL agents.
+Recently, transfer learning (TL) [5] based methods have been increasingly studied to improve sample efficiency and model reproducibility in the broad machine learning field [6]-[8]. The basic idea of TL is to utilize prior knowledge from prelearned tasks to benefit the training process on new tasks. For example, the resource partitioning policy of a cell can be transferred to another cell when they share similar network settings, e.g., bandwidth, transmit power, and traffic pattern. Generally, several questions need to be answered before using TL methods, i.e., what to transfer, from whom to transfer, and how to transfer. Existing TL methods mostly focus on supervised machine learning, e.g., computer vision and natural language processing [9], and provide limited insight into applying TL to DRL tasks [10]-[13]. Therefore, it is imperative to study how TL improves the performance of MADRL in terms of sample efficiency and fine-tuning cost in the inter-cell resource partitioning problem.
+In this paper, we propose a novel TL aided MADRL approach with domain similarity analysis for inter-slice resource partitioning. First, we design a coordinated MADRL method for inter-cell resource partitioning problems in network slicing, where DRL agents share local information with each other to mitigate inter-cell interference. The objective of MADRL is to maximize the satisfaction level of per-slice service requirements in terms of average user throughput and delay in each cell. Second, we design an integrated TL method to transfer the learned DRL policies among different agents to accelerate policy deployment, which consists of two parts. On the one hand, we propose a feature-based inter-agent similarity analysis approach, which measures the domain and task difference by extracting representative feature distributions in latent space. On the other hand, we propose a new knowledge transfer approach with combined model (policy) and instance transfer. The main contributions of this paper are summarized as follows:
+- We design a coordinated MADRL method for the inter-cell resource partitioning problem in network slicing.
+- We design a novel inter-agent similarity analysis approach, based on features extracted by a variational auto-encoder (VAE), to evaluate both domain and task similarity between two reinforcement learning agents.
+- We design a new knowledge transfer approach that combines model (policy) and instance transfer from the selected source agent to the target agent.
+- We evaluate the performance of the proposed solution with extensive simulations in a system-level simulator. The results show that, by applying TL, we achieve an additional gain of over 27% compared with the coordinated MADRL approach without TL. Moreover, the performance gain achieved by TL is more significant in the low-data regime.
+arXiv:2301.03262v1 [cs.NI] 9 Jan 2023
+Figure 1: Dynamic multi-cell slicing resource partitioning
+II. SYSTEM MODEL AND DEFINITIONS
+We consider a network consisting of a set of cells K := {1, 2, ..., K} and a set of slices N := {1, 2, ..., N}. Each slice n ∈ N has predefined average user throughput and delay requirements, denoted by φ*_n and d*_n, respectively. The network system runs on discrete time slots t ∈ N_0. As illustrated in Fig. 1, network operation and maintenance (O&M) periodically adapts the inter-slice resource partitioning for all cells to provide per-slice resource budgets to each cell. Then, within each cell, the radio access network (RAN) scheduler uses the provided resource budgets as constraints and performs resource scheduling and physical resource block (PRB) allocation. In this paper, we focus on the inter-cell inter-slice resource partitioning problem in network O&M.
+Considering the diverse slice requirements and dynamic network conditions, we model the multi-cell resource partitioning system as a set of K distributed MDPs M := {M_1, ..., M_K}, with M_k := {S_k, A_k, P_k(·), r_k(·), γ_k} defined for each agent k ∈ K (with a slight abuse of notation, hereafter we use k for cell and agent interchangeably). S_k and A_k denote the state space and action space, respectively. P_k(·): S_k × A_k × S_k → [0, 1] is the transition probability over S_k and A_k for cell k. r_k: S_k × A_k → R is the reward function, which evaluates the network service of all slices in cell k, and γ_k denotes the discount factor for the cumulative reward. At each time step t, agent k collects state s_k(t) ∈ S_k and decides an action a_k(t) ∈ A_k according to policy π_k: S_k → A_k, which indicates the per-slice resource partitioning ratio a_{k,n} ∈ [0, 1] for each n ∈ N while satisfying the inter-slice resource constraint. Thus, the local action space A_k is
+    A_k := { a_k | a_{k,n} ∈ [0, 1], ∀n ∈ N; Σ_{n=1}^{N} a_{k,n} = 1 }.    (1)
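The action space in (1) is the probability simplex over the N slices. As an illustrative sketch (not from the paper), a raw policy output can be clipped and renormalized to satisfy this constraint:

```python
def project_to_partition(raw_action):
    """Map a raw per-slice action vector to a valid partition.

    Clips each ratio to [0, 1] and renormalizes so the ratios sum
    to 1, as required by the action space A_k in Eq. (1).
    """
    clipped = [min(max(a, 0.0), 1.0) for a in raw_action]
    total = sum(clipped)
    if total == 0.0:
        # Degenerate case: fall back to an equal split across slices.
        return [1.0 / len(clipped)] * len(clipped)
    return [a / total for a in clipped]

# Example: a raw policy output for N = 3 slices.
a_k = project_to_partition([0.5, 0.3, 0.4])
assert abs(sum(a_k) - 1.0) < 1e-9
```

Any number of slices works the same way; only the renormalization to the simplex matters here.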
+For each cell k ∈ K, our objective is to maximize the minimum service satisfaction level over all slices, in terms of the average user throughput and delay requirements (φ*_n, d*_n). Thus, for each agent k, we define the local reward function based on the observed per-slice average user throughput φ_{k,n}(t) and delay d_{k,n}(t) at time t as
+    r_k(t) := min_{n ∈ N} min{ φ_{k,n}(t)/φ*_n, d*_n/d_{k,n}(t), 1 }.    (2)
+The reward drops below 1 when the actual average throughput or delay of any slice fails to fulfill the requirements. Note that the reward is upper bounded by 1 even if all slices perform better than required, which encourages more efficient resource utilization. The second term in (2) is inversely proportional to the actual delay: if the delay is longer than required, this term is lower than 1.
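The reward in (2) can be evaluated directly from the per-slice KPI observations; the following sketch is illustrative (the function and variable names are ours, not the paper's):

```python
def slice_reward(throughputs, delays, thr_req, delay_req):
    """Local reward r_k(t) from Eq. (2): the worst per-slice
    satisfaction level, capped at 1.

    throughputs, delays: observed phi_{k,n}(t) and d_{k,n}(t) per slice.
    thr_req, delay_req:  requirements phi*_n and d*_n per slice.
    """
    return min(
        min(phi / phi_req, d_req / d, 1.0)
        for phi, d, phi_req, d_req in zip(throughputs, delays, thr_req, delay_req)
    )

# Two slices: slice 1 meets both targets, slice 2 misses its delay target.
r = slice_reward([12.0, 5.0], [0.8, 2.0], [10.0, 5.0], [1.0, 1.0])
# Slice 2's delay ratio 1.0/2.0 = 0.5 is the binding term, so r = 0.5.
```

The min over slices makes the reward a max-min objective: overprovisioning one slice cannot compensate for violating another.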
+III. PROBLEM FORMULATION
+The reinforcement learning problem: The problem is to find, for each k ∈ K, a policy π_k: S_k → A_k that dynamically predicts the optimal inter-slice resource partitioning a_k(t) ∈ A_k based on the local state s_k(t) ∈ S_k, so as to maximize the expected cumulative discounted reward r_k(t) defined in (2) over a finite time horizon T. The problem is given by:
+    max_{π_k; a_k(t) ∈ A_k} E_{π_k}[ Σ_{t=0}^{T} γ_k^t r_k(s_k(t), a_k(t)) ], ∀k ∈ K,    (3)
+where A_k is defined in (1).
+In our previous work [14], we proposed a coordinated multi-agent DRL approach that transforms an MADRL problem into a distributed DRL problem similar to (3), where information extracted from neighboring cells is included in the state observation to better capture the inter-agent dependency. However, training all local agents in parallel from scratch can be costly and time-consuming. Moreover, the trained models are sensitive to environment changes, and the retraining cost can be high. Thus, in this paper, we raise the following new questions: Can we reuse the knowledge in a pretrained model? When is the knowledge transferable? And, most importantly, how can the gained knowledge be transferred from one agent to another?
+The transfer learning problem: To tackle the transfer learning problem, let us first introduce two definitions, domain and task, in the context of reinforcement learning. A domain D := {S, P(s)} consists of a state feature space S and its probability distribution P(s) for s ∈ S. A task T := {A, π(·)} consists of the action space A and a policy function π: S → A. Thus, our inter-agent transfer learning problem is to find the optimal source agent among a set of pretrained agents and transfer its knowledge (pretrained model and collected instances) to the target agent, such that problem (3) can be solved by the target agent with fast convergence and a limited number of samples. In particular, the problem is defined in Problem 1.
+Problem 1. Given a set of pretrained source agents K̄ ⊂ K with source domains D^(S) := {D^(S)_i : i ∈ K̄} and pretrained tasks T^(S) := {T^(S)_i : i ∈ K̄}, and given any target agent k ∉ K̄ with target domain D^(T)_k and untrained task T^(T)_k, find the optimal source agent i*_k ∈ K̄ for target agent k to transfer knowledge from, such that
+    i*_k := argmax_{i ∈ K̄; π_k with π^(0)_k = Λ(π^(S)_i)} E_{π_k}[ Σ_{t=0}^{T} γ_k^t r_k(s_k(t), a_k(t)) ]    (4)
+    s.t. (s_k, a_k) ∈ Γ( D^(S)_i, D^(T)_k, A^(S)_i, A^(T)_k ),
+where Λ(π^(S)_i) is the policy transfer strategy, which maps a pretrained source policy π^(S)_i to the initial target policy π^(0)_k, while Γ(D^(S)_i, D^(T)_k, A^(S)_i, A^(T)_k) is the instance transfer strategy, which selects instances from the source agent, combines them with the experienced instances from the target agent, and saves them in the replay buffer for model training or fine-tuning in the target agent. More details about the transfer learning strategies are given in Section IV-C.
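Conceptually, (4) selects the source agent whose transferred policy maximizes the target agent's expected discounted return. A schematic sketch under the assumption that each candidate's transferred policy can be rolled out on the target cell (the transfer operators Λ and Γ are abstracted into the evaluation callback, which is not the paper's implementation):

```python
def discounted_return(rewards, gamma):
    """Finite-horizon discounted return sum_t gamma^t * r_t, as in Eq. (3)."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def select_source_agent(source_agents, evaluate_on_target, gamma=0.9):
    """Pick i*_k per Eq. (4): the pretrained source whose transferred
    policy yields the highest discounted return on the target MDP.

    evaluate_on_target(agent) -> list of rewards from a rollout of the
    transferred policy on the target cell (placeholder for Lambda/Gamma).
    """
    return max(
        source_agents,
        key=lambda i: discounted_return(evaluate_on_target(i), gamma),
    )

# Toy example with fixed reward traces per candidate source.
traces = {"cell_1": [0.4, 0.5, 0.6], "cell_2": [0.9, 0.8, 0.7]}
best = select_source_agent(traces, lambda i: traces[i])
# cell_2's trace dominates at every step, so it is selected.
```

In practice, evaluating every candidate by rollout is expensive, which is why the paper instead estimates transferability via the similarity analysis of Section IV.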
+page_content=' IV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content=' PROPOSED SOLUTIONS In this section, we first present a distributed MADRL approach to solve the slicing resource partitioning problem in (3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content=' Then, to solve problem (4) to find the optimal source agent, we propose a novel approach to inter-agent similarity analysis based on the extracted features using VAE.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content=' Finally, for inter-agent transfer learning, we introduce transfer learning strategy which combines the model (policy) transfer and instance transfer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
A. Coordinated MADRL Approach

As stated in (3), the distributed DRL approach allows each agent to learn a local policy and make its own inter-slice resource partitioning decisions based on local observations. Compared with centralized DRL approaches, distributed approaches reduce the state and action spaces and significantly accelerate training. However, local observations alone cannot capture the inter-cell dependencies or provide sufficient information to achieve the globally optimal solution. Thus, in [14] we proposed a distributed DRL approach with inter-agent coordination that keeps the model complexity low while incorporating information extracted from neighboring cells to capture inter-cell interference. We briefly summarize this coordinated distributed DRL approach below, since the focus of this paper is inter-agent transfer learning; for more details, readers are referred to our previous work [14].
Each local agent k observes a local state s'_k, which contains the following network measurements:
- Per-slice average user throughput {φ_k,n : n ∈ N};
- Per-slice network load {l_k,n : n ∈ N};
- Per-slice number of users {u_k,n : n ∈ N}.
With these three slice-specific features, the local state s'_k has dimension 3N.
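As a concrete illustration, the local state is simply the concatenation of the three slice-specific feature vectors; the measurement values below are hypothetical:

```python
import numpy as np

# Hypothetical per-slice measurements for one cell with N = 4 slices:
# average user throughput phi_k (MBit/s), load l_k, and user count u_k.
phi_k = np.array([3.8, 2.9, 2.1, 1.0])
l_k   = np.array([0.7, 0.5, 0.4, 0.2])
u_k   = np.array([12, 8, 5, 3])

# Local state s'_k: concatenation of the three feature vectors, dimension 3N.
s_prime_k = np.concatenate([phi_k, l_k, u_k])
assert s_prime_k.shape == (3 * 4,)
```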
Additionally, to better capture the inter-cell dependencies and estimate the global network performance, we introduce an inter-agent coordination mechanism based on network information sharing among agents. Each agent k broadcasts a message m_k to its neighboring group of agents, denoted by K_k; accordingly, each agent k receives a collection of messages m_k := [m_i : i ∈ K_k] ∈ R^Z(m). Instead of using all received messages in m_k directly, we extract useful information c_k ∈ R^Z(c) to keep the model complexity low. We aim to find a feature extractor g : R^Z(m) → R^Z(c) : m_k ↦ c_k such that Z(c) ≪ Z(m). We then include the features extracted from the shared messages in the local state: s_k := [s'_k, c_k].
Knowing that the inter-agent dependencies are mainly caused by inter-cell interference through cell load coupling [15], we let each cell k share its per-slice load l_k,n, ∀n ∈ N, with its neighboring cells. We then compute the extracted information c_k as the average per-slice neighboring load. Namely, we define a deterministic feature extractor

g_k : R^(N|K_k|) → R^N : [l_i,n : n ∈ N, i ∈ K_k] ↦ c_k(t), with c_k(t) := [ (1/|K_k|) Σ_{i∈K_k} l_i,n(t) : n ∈ N ]. (5)

(Figure 2: Variational autoencoder.)

With the extended local state including the inter-agent shared information, we can use classical DRL approaches, e.g., actor-critic algorithms such as Twin Delayed Deep Deterministic policy gradient (TD3) [16], to solve (3).
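The feature extractor of Eq. (5) reduces to a per-slice mean over the loads shared by the neighboring cells. A minimal sketch, with illustrative numbers:

```python
import numpy as np

def extract_neighbor_load(neighbor_loads: np.ndarray) -> np.ndarray:
    """Deterministic feature extractor g_k of Eq. (5): maps the per-slice
    loads of the |K_k| neighboring cells (shape |K_k| x N) to the average
    per-slice neighboring load c_k (shape N)."""
    return neighbor_loads.mean(axis=0)

# Illustrative example: 3 neighboring cells, N = 4 slices.
loads = np.array([[0.6, 0.4, 0.2, 0.1],
                  [0.8, 0.2, 0.4, 0.3],
                  [0.4, 0.6, 0.6, 0.2]])
c_k = extract_neighbor_load(loads)
assert c_k.shape == (4,)
assert np.allclose(c_k, [0.6, 0.4, 0.4, 0.2])
```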
B. Integrated TL with Similarity Analysis

The distributed DRL approach introduced in Section IV-A allows us to derive a set of pretrained local agents. Still, given a target cell k, e.g., a newly deployed cell or an existing cell whose environment has changed, several questions remain: Can we transfer the prelearned knowledge from at least one of the pretrained agents? Which source cell provides the most transferable information? How should the knowledge be transferred? To solve the transfer learning problem in (4), we develop a distance measure D_i,k that quantifies the inter-agent similarity between a source agent i and a target agent k. We aim to transfer the knowledge from the source agent with the highest similarity (reflected by the lowest distance measure).
The ideal approach to analyzing the domain and task similarity between two agents would be to obtain their state probability distributions P(s) and derive the conditional probability distributions P(a|s). However, the major challenge here lies in the limited number of samples at the target agent. Since the target agent is newly deployed, no information about its policy P(a|s) is available, and P(s) is strongly biased because all samples are collected under the default configuration (i.e., constant actions). Thus, we need to design a distance measure that works with very limited and biased samples from the target agent, without any information about its policy P(a|s).
Our idea is to derive and compare the joint state and reward distribution under the same default action a', P(s, r | a = a'), in both the source and the target agent. The rationale is that, in an actor-critic DRL architecture, the critic estimates the Q-value Q_π(a, s) from the action and the state; hence, the conditional probability P(r | s, a) provides useful information about the policy. With a = a', we can thus estimate P(r | s, a = a'). To efficiently capture information on both domain similarity (via P(s | a = a')) and task/policy similarity (via P(r | s, a = a')), we propose to estimate the joint probability P(s, r | a = a') = P(r | s, a = a') P(s | a = a').
Sample collection: To estimate the distance between the distributions P(s, r | a = a') of the source and target agents, we use all available samples from the target agent k under the default action a', X_k = {(s_k(n), r_k(n))_{a_k(n)=a'} : n = 1, ..., N_k}, and select a subset of samples from the source agent i under the same default action, X_i = {(s_i(n), r_i(n))_{a_i(n)=a'} : n = 1, ..., N_i}. Note that in this subsection we slightly abuse notation by using n as the sample index and N_k as the number of samples with the default action collected from agent k.
Feature extraction with VAE: To extract representative features from the high-dimensional vector [s, r], we apply a VAE [17] to map the samples into a low-dimensional latent space. As Fig. 2 illustrates, for each sample x := [s, r] ∈ X, the encoder of the VAE approximates the distribution P(z) in the latent space Z by a multivariate Gaussian N(µ, diag(σ)), where diag denotes the diagonal matrix. The decoder samples a latent variable z ∈ Z from the approximate distribution z ∼ N(µ, diag(σ)) and outputs a reconstructed sample x̂. The VAE is trained on the loss function

L := ‖x − x̂‖² + α · D_KL( N(µ, diag(σ)) ‖ N(0, diag(1)) ), (6)

where α is a weight factor and D_KL denotes the Kullback-Leibler (KL) divergence.
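The regularization term of Eq. (6) has a well-known closed form for a diagonal-Gaussian posterior against a standard-normal prior. A small sketch, under the assumption that diag(σ) holds per-dimension standard deviations:

```python
import numpy as np

def kl_to_standard_normal(mu: np.ndarray, sigma: np.ndarray) -> float:
    """Closed-form KL divergence D_KL(N(mu, diag(sigma^2)) || N(0, I)),
    i.e. the regularization term in the VAE loss of Eq. (6).
    sigma holds per-dimension standard deviations (assumption)."""
    return float(0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma)))

# A posterior identical to the prior incurs zero KL penalty.
assert abs(kl_to_standard_normal(np.zeros(3), np.ones(3))) < 1e-12
```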
Inter-agent similarity analysis: Since the VAE does not directly provide the probability distribution function P(x), we propose to use the extracted features in the latent space to evaluate the inter-agent similarity. Given the limited number of samples (only those under the default action), we train a single general VAE model on the samples from all candidate source agents and the target agent, i.e., X = ∪_{j ∈ K ∪ {k}} X_j. The idea is to extract the latent features of the samples from all relevant agents with one general encoder and to distinguish the agents within a common latent space. Thus, for each sample x_n ∈ X, we can derive its extracted features, i.e., the posterior distribution P(z_n | x_n) = N(µ_n, diag(σ_n)). We denote the extracted latent space of agent k by Z_k.
Next, we measure the inter-agent distance between an arbitrary source agent i and the target agent k by the average KL divergence between the latent variables extracted from their collected samples:

D_i,k := (1/(N_i N_k)) Σ_{(µ_n,σ_n) ∈ Z_i; (µ_m,σ_m) ∈ Z_k} D_KL( N(µ_n, diag(σ_n)) ‖ N(µ_m, diag(σ_m)) ). (7)

This requires computing the KL divergence of every pair of samples (n, m) with n ∈ X_i and m ∈ X_k, which can be computationally intensive. Since both are Gaussian distributions, however, each divergence can be computed efficiently with a closed-form expression (as will be shown in (8)). Moreover, in our experiments we observed that σ_n → 0 for nearly all collected samples x_n ∈ X, i.e., their variances are extremely small (below 10^-5 in our observations). Thus, for our problem, we can evaluate the distance measure even more efficiently based on the following lemma.
Lemma 1. Given two multivariate Gaussian distributions p = N(µ_n, Σ_n) and q = N(µ_m, Σ_m), where µ_n, µ_m ∈ R^L and Σ_n = Σ_m = diag(σ², ..., σ²) ∈ R^(L×L) for a small positive constant σ ≪ 1, the KL divergence D_KL(p‖q) is proportional to Σ_{l=1}^{L} (µ_{n,l} − µ_{m,l})².

Proof. It is straightforward to derive that

D_KL(p‖q) = (1/2) [ log(|Σ_m|/|Σ_n|) − L + (µ_n − µ_m)^T Σ_m^{-1} (µ_n − µ_m) + Tr(Σ_m^{-1} Σ_n) ]. (8)

Because Σ_n = Σ_m = diag(σ², ..., σ²), the first term in (8) equals 0 and the last term equals L. Thus, we obtain

D_KL(p‖q) = (1/(2σ²)) Σ_{l=1}^{L} (µ_{n,l} − µ_{m,l})². (9)

With Lemma 1, we can measure the distance between two agents more efficiently, based only on the extracted means µ_n and µ_m in the source and target latent spaces.
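The collapse from Eq. (8) to Eq. (9) can be checked numerically: with equal isotropic covariances σ²I, the general closed form reduces to the scaled squared distance between the means. A sketch with illustrative values:

```python
import numpy as np

def kl_gaussian(mu_n, mu_m, cov_n, cov_m):
    """General closed-form KL divergence between two multivariate
    Gaussians, as in Eq. (8)."""
    L = mu_n.size
    cov_m_inv = np.linalg.inv(cov_m)
    diff = mu_n - mu_m
    return 0.5 * (np.log(np.linalg.det(cov_m) / np.linalg.det(cov_n))
                  - L
                  + diff @ cov_m_inv @ diff
                  + np.trace(cov_m_inv @ cov_n))

# With equal isotropic covariances sigma^2 * I, Eq. (8) collapses to Eq. (9).
sigma = 1e-3
mu_n = np.array([0.2, -0.1, 0.4])
mu_m = np.array([0.1,  0.3, 0.0])
cov = sigma**2 * np.eye(3)
full = kl_gaussian(mu_n, mu_m, cov, cov)
simplified = np.sum((mu_n - mu_m)**2) / (2 * sigma**2)
assert np.isclose(full, simplified)
```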
Thus, to solve Problem (III.1), we propose to choose the source agent

i*_k := arg min_{i ∈ K} D_i,k, (10)

where D_i,k is computed based on (7) and (9).
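Combining (7), (9), and (10), source selection reduces to comparing average squared distances between latent means. A minimal sketch (the constant factor 1/(2σ²) is dropped since it does not affect the arg min; the data and names are illustrative):

```python
import numpy as np

def agent_distance(mus_i: np.ndarray, mus_k: np.ndarray) -> float:
    """D_{i,k} of Eq. (7) using the simplified KL of Eq. (9), up to the
    constant 1/(2*sigma^2): mean squared distance over all pairs of
    latent means from source agent i (N_i, L) and target agent k (N_k, L)."""
    diffs = mus_i[:, None, :] - mus_k[None, :, :]   # shape (N_i, N_k, L)
    return float(np.mean(np.sum(diffs**2, axis=-1)))

def select_source_agent(latent_means: dict, target_mus: np.ndarray):
    """Eq. (10): pick the candidate source agent with minimal distance."""
    return min(latent_means, key=lambda i: agent_distance(latent_means[i], target_mus))

# Toy example: agent 'b' clusters near the target in latent space.
target = np.array([[0.0, 0.0], [0.1, 0.1]])
candidates = {'a': np.array([[2.0, 2.0]]), 'b': np.array([[0.05, 0.0]])}
assert select_source_agent(candidates, target) == 'b'
```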
C. Integrated Transfer Learning Approach

In general, the prelearned knowledge can be transferred from a source agent i to the target agent k via various policy transfer strategies Λ(·) and instance transfer strategies Γ(·):

Model transfer: The policy transfer strategy Λ(·) simply initializes the target agent's policy π(0)_k by loading the parameters (e.g., the weights of the pretrained neural networks) of the pretrained policy π(S)_i from the source agent i.

Feature transfer: The policy transfer strategy Λ(·) keeps partial information extracted from the source agent's pretrained policy π(S)_i. In particular, the target agent loads some of the layers (usually the lower layers) of the pretrained neural networks of π(S)_i, while the remaining layers are randomly initialized. During training, the loaded layers are frozen and only the randomly initialized layers are fine-tuned with the instances newly collected by the target agent.

Instance transfer: The instance transfer strategy Γ(·) transfers the collected instances from the source agent i to the target agent k and stores them in the target agent's replay buffer. The target agent then trains a policy from scratch, with randomly initialized parameters and mixed instances collected from both the source and the target agent.
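The instance transfer strategy Γ(·) can be sketched as mixing source transitions into the target agent's replay buffer; the function name, capacity, and mixing ratio below are illustrative assumptions, not the paper's exact procedure:

```python
import random

def build_mixed_replay_buffer(source_instances, target_instances,
                              capacity=10000, source_fraction=0.5):
    """Hypothetical sketch of the instance transfer strategy Gamma: mix
    source-agent transitions with the target agent's own experience in one
    replay buffer, capping the source share at a chosen fraction."""
    n_src = min(len(source_instances), int(capacity * source_fraction))
    buffer = random.sample(source_instances, n_src) + list(target_instances)
    random.shuffle(buffer)
    return buffer[:capacity]

src = [('s_i', 'a', 'r', "s_i'")] * 100   # transitions from the source agent
tgt = [('s_k', 'a', 'r', "s_k'")] * 20    # locally experienced transitions
buf = build_mixed_replay_buffer(src, tgt, capacity=50, source_fraction=0.5)
assert len(buf) == 45  # 25 source + 20 target transitions
```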
The above-mentioned knowledge from the source domain and source task can be transferred separately or in combination. In this paper, we propose an integrated transfer method combining model and instance transfer. Specifically, the target agent k initializes its local policy π(0)_k by loading the pretrained policy π(S)_i of the source agent and fine-tunes the policy by sampling from a replay buffer containing both types of instances: those transferred from the source agent and those experienced locally. We skip feature transfer here because in practice it performs well only when the similarity between the source and target domain/task is very high; although this assumption may hold for some regression and classification tasks, we empirically find that it fails in this MADRL context.
V. PERFORMANCE EVALUATION

In this section, we evaluate the performance of the proposed solution in a system-level simulator [18]. The simulator achieves great accuracy in imitating real network systems, with configurable user mobility, network slicing traffic, and topology. In addition, we introduce a traffic-aware baseline which allocates resources proportionally to the per-slice data traffic demand. Note that this baseline assumes perfect information about per-cell, per-slice traffic demands and already provides very good results.

(Figure 3: Traffic mask to imitate the time-varying network traffic. Figure 4: Comparing reward during the training process. Figure 5: Comparing CDF of minimum slice throughput satisfaction.)
1) Network settings: We build a radio access network with 4 three-sector sites (i.e., K = 12 cells). All cells use LTE radio technology at 2.6 GHz under the realistic radio propagation model Winner+ [19]. Each cell has N = 4 slices with diverse per-slice requirements on average user throughput and delay. In the cells labeled 1, 2, 3, 7, 8, 9, we define per-slice average throughput requirements of φ*_1 = 4 MBit/s, φ*_2 = 3 MBit/s, φ*_3 = 2 MBit/s, and φ*_4 = 1 MBit/s, and per-slice delay requirements of d*_1 = 3 ms, d*_2 = 2 ms, d*_3 = d*_4 = 1 ms. In the cells labeled 4, 5, 6, 10, 11, 12, we define per-slice throughput requirements of φ*_1 = 2.5 MBit/s, φ*_2 = 2 MBit/s, φ*_3 = 1.5 MBit/s, and φ*_4 = 1 MBit/s, and delay requirements of d*_n = 1 ms, ∀n ∈ N. All cells have the same radio bandwidth of 20 MHz. We define four groups of user equipment (UE), one per slice in each cell; each UE group has a maximum size of 32 and moves randomly within the defined network scenario. To mimic the dynamic behavior of real user traffic, we apply a time-varying traffic mask τ_n(t) ∈ [0, 1] to each slice to scale the total number of UEs in each cell; Fig. 3 shows the traffic mask for the first 200 steps.
2) DRL training configuration: For MADRL training, we implement the TD3 algorithm at each local agent, using a multilayer perceptron (MLP) architecture for the actor and critic networks. In each TD3 model, the actor and critic networks each consist of two layers, with (48, 24) and (64, 24) neurons, respectively. The learning rates of the actor and critic are 0.0005 and 0.001, respectively, with the Adam optimizer and a training batch size of 32. We set the discount factor to γ = 0.1, since the current action has a stronger impact on the instant network performance than on future observations. For the distributed DRL agents, we use 3000 steps for exploration, 5500 steps for training, and a final 250 steps for evaluation. For the TL training process, we apply the same model setup as in the DRL approaches, but with only 4000 steps for training and 250 for evaluation, since the knowledge transfer saves exploration time.
3) Comparing DRL to the TL-aided approach: In Fig. 4, we compare the evolution of the reward during training among the baseline, the DRL approach (proposed in Section IV-A), and the TL approaches with transfer from source agents of low and high similarity (proposed in Sections IV-B and IV-C), respectively. For DRL, we plot the first 4000 steps, i.e., the same training time as the TL approaches, with a solid line and the remaining training curve with a dashed line. As shown in Fig. 4, the distributed DRL approach learns to achieve a reward similar to the baseline after a lengthy exploration phase, while both TL approaches start much higher than DRL. After a short fine-tuning period, the TL approaches outperform the baseline with higher robustness, especially during periods of high traffic demand and strong inter-cell interference, where the baseline suffers sharp performance degradation. Moreover, comparing transfer from agents with different similarity measures, we observe that higher similarity yields a higher start in the early stage of training, while both variants converge to similar performance.
For performance evaluation, we compare statistical results across all cells on the minimum per-slice throughput satisfaction level and the maximum per-slice delay, respectively, for the baseline, the distributed DRL approach, and the proposed TL approach after convergence. Fig. 5 illustrates the empirical complementary CDF, which equals 1 − F_X(x), where F_X(x) is the CDF of the minimum per-slice throughput satisfaction level. We observe that the TL approach performs best: only about 12% of cells fail to satisfy 0.95 of the requirement, while the converged DRL approach and the baseline show failure rates of 19% and 25%, respectively. In terms of average satisfaction level, the TL approach achieves 0.92, while DRL and the baseline only provide 0.90 and 0.87. A similar observation can be made from Fig. 6, which illustrates the CDF of the maximum slice delay in ms. The TL approach provides 1.5 ms maximum average per-slice delay, while DRL achieves 1.7 ms and the baseline 1.8 ms.
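The empirical complementary CDF used for this comparison can be computed directly from per-cell samples; a minimal sketch (the sample values below are illustrative, not data from the paper):

```python
def empirical_ccdf(samples, x):
    """Empirical 1 - F_X(x): fraction of samples strictly greater than x."""
    return sum(1 for s in samples if s > x) / len(samples)

# Hypothetical per-cell minimum throughput satisfaction levels.
satisfaction = [0.99, 0.97, 0.96, 0.93, 0.90, 0.85, 0.99, 0.94]

# Share of cells that satisfy more than 0.95 of the requirement;
# 1 minus this value is the failure rate reported in the text.
ccdf_95 = empirical_ccdf(satisfaction, 0.95)  # 4 of 8 samples -> 0.5
```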
4) Inter-agent similarity analysis: We implemented the similarity analysis method introduced in Section IV-B with a VAE in MLP architecture; the encoder and decoder networks each consist of 3 layers, with (64, 24, 4) and (4, 24, 64) neurons, respectively. To achieve a good trade-off between a low-dimensional latent space and accurate reconstruction with the VAE, we map the original sample x ∈ R^17 to the latent variable z ∈ R^4. Fig. 7 illustrates the results of the inter-agent similarity analysis in terms of the distance measure proposed in (7). It shows that our proposed method can distinguish cells with different per-slice service quality requirements and groups together the cells with similar joint state-reward distributions.
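The encoder half of such a VAE can be sketched as below; the (64, 24, 4) layer widths and the 17→4 dimensionality follow the text, while the weights are untrained random placeholders and the single-head output is a simplification (a full VAE encoder would emit both a mean and a log-variance for z):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    # Random placeholder weights; a real model would be trained.
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

# Encoder layers 17 -> 64 -> 24 -> 4, matching the widths in the text.
layers = [dense(17, 64), dense(64, 24), dense(24, 4)]

def encode(x):
    """Map a 17-dim state sample to a 4-dim latent code."""
    h = x
    for i, (w, b) in enumerate(layers):
        h = h @ w + b
        if i < len(layers) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers
    return h

z = encode(rng.standard_normal(17))
```

The decoder mirrors this with widths (4, 24, 64) back to R^17, and the inter-agent distance of (7) is then computed between latent representations.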
5) Dependence of TL performance on the distance measure: In Fig. 8 we compare the benefits of TL during training when knowledge is transferred from source agents with different average inter-agent distance measures.
[Figure axis and legend residue omitted. Figure 6: Comparing CDF of maximum slice delay. Figure 7: Inter-agent distance measure. Figure 8: TL performance gain depending on distance measure.]
The TL gains are derived by comparing the reward to that of the DRL approach at the same training steps.
The results show that within the first 200 steps of TL training, the TL approach with the lowest distance measure provides about 3% higher gain than the one with the largest distance. As training continues, the gains of all TL approaches increase with local fine-tuning, and the difference between transferring from highly similar and from less similar agents shrinks. However, TL from the most similar agent provides the higher gain at all training steps.
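The per-step gain measure can be realized, for instance, as the relative reward improvement over DRL at the same training step; this is our reading of the comparison, not the paper's exact definition:

```python
def tl_gain(reward_tl, reward_drl):
    """Relative reward improvement of TL over DRL at the same training step."""
    return (reward_tl - reward_drl) / reward_drl

# Illustrative values: TL reaches reward 0.88 where DRL is at 0.80.
gain = tl_gain(0.88, 0.80)  # 10% gain
```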
6) Key takeaways: We summarize the takeaways from the numerical results as follows:
• All distributed DRL-based approaches achieve better per-slice network service than the traffic-aware baseline after convergence. Moreover, the TL schemes outperform the conventional DRL approach in terms of convergence rate as well as initial and converged performance.
• Our proposed VAE-based similarity measure quantifies the distance between agents well and can be used to suggest a mapping from the defined distance measure to the transfer learning performance gain.
• The difference between the gains achieved by TL from highly similar and from less similar agents is more significant when the number of training steps is low (i.e., with limited online training samples).
• Although the advantage of transferring from a highly similar agent over a less similar one decreases as the number of online training steps increases, a slight performance gain is always achieved by transferring knowledge from the most similar source agent.
VI. CONCLUSION
In this paper, we formulated the dynamic inter-slice resource partitioning problem to optimize the requirement satisfaction level of all slices in each cell. To tackle inter-cell interference, we proposed a coordinated MADRL method with an information-sharing coordination scheme. We further proposed a novel integrated TL method that transfers learned DRL policies among local agents to accelerate policy deployment; it combines a new inter-agent similarity measurement approach with a new knowledge transfer approach. We evaluated the proposed solution with extensive simulations in a system-level simulator, where the results show that our approach outperforms conventional DRL solutions.
ACKNOWLEDGMENT
This work was supported by the German Federal Ministry of Education and Research (BMBF) project KICK [16KIS1102K].
+page_content=' J¨ams¨a, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content=' Suikkanen, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content=' Kunnari, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content=' Narandˇzi´c, Wireless World Initiative New Radio - Winner+.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content=' Technical report, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='0 CDF 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='9 1 Complementary 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='7 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='6 TL 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='5 Baseline DRL 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='0010 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='0012 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='0014 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='0016 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='0018 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='002 Max Slice Delay [in s]2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='175 m 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='150 4 5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='125 6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='100 7 080 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='075 6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='050 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='025 2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='000 1 2 5 6 > 10 11 1227 26 [in 25 Gain 24 after 100 steps TL after 200 steps 23 after 500 steps after1000 steps 22 after 2000 steps 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='003 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='011 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='081 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
+page_content='117 Distance Measure' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'}
diff --git a/u9FAT4oBgHgl3EQfiB2Y/content/2301.08597v1.pdf b/u9FAT4oBgHgl3EQfiB2Y/content/2301.08597v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4c576e327403f768ea6db8d300a8fc9a040f7225
--- /dev/null
+++ b/u9FAT4oBgHgl3EQfiB2Y/content/2301.08597v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:74e000c2ae85ed6b7f4228981a7607b0490cbc86d981179a0418659290bd91b2
+size 1263826
diff --git a/u9FAT4oBgHgl3EQfiB2Y/vector_store/index.pkl b/u9FAT4oBgHgl3EQfiB2Y/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..1172a4de01de46e248a4c2a5b4d8cccb131ff563
--- /dev/null
+++ b/u9FAT4oBgHgl3EQfiB2Y/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72f0b5c0502f52c23d02c5350308728b92d11bec2ebecc8a6f964b2f89272fe4
+size 388396
diff --git a/uNE5T4oBgHgl3EQfLQ6L/content/tmp_files/2301.05472v1.pdf.txt b/uNE5T4oBgHgl3EQfLQ6L/content/tmp_files/2301.05472v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..5e14529883aba2a7b848f58b8232d89534d447fe
--- /dev/null
+++ b/uNE5T4oBgHgl3EQfLQ6L/content/tmp_files/2301.05472v1.pdf.txt
@@ -0,0 +1,1748 @@
+arXiv:2301.05472v1 [math.AP] 13 Jan 2023
+EXISTENCE OF SOLUTIONS TO A CLASS OF ONE-DIMENSIONAL MODELS FOR
+PEDESTRIAN EVACUATIONS∗
+BORIS ANDREIANOV† AND THEO GIRARD ‡
+Abstract. In the framework inspired by the R. L. Hughes model (Transp. Res. B, 2002) for pedestrian evacuation in a
+corridor, we establish existence of a solution by a topological fixed point argument. This argument applies to a class of models
+where the dynamics of the pedestrian density ρ (governed by a discontinuous-flux Lighthill, Whitham and Richards model
+ρt + (sign(x − ξ(t))ρv(ρ))x = 0 ) is coupled via an abstract operator to the computation of a Lipschitz continuous “turning
+curve” ξ. We illustrate this construction by several examples, including the standard Hughes’ model with affine cost, and either
+with open-end conditions or with conditions corresponding to panic behaviour with capacity drop at exits. Other examples put
+forward versions of the Hughes model with inertial dynamics of the turning curve and general costs.
+Key words.
+crowd dynamics, pedestrian evacuation, Hughes’ model, capacity drop, existence, Schauder fixed-point,
+admissible solution, discontinuous-flux conservation law, memory, relaxation
+MSC codes. 35L65, 47H10
+1. Introduction.
+1.1. The Hughes model and its variants. The Lighthill, Whitham and Richards (LWR) model for
+traffic, introduced in [18] and in [20], consists of a conservation law for the vehicle density ρ with a concave
+positive flux ρv(ρ):
+(1.1)    ρt + [ρv(ρ)]x = 0,    ρ(t = 0, x) = ρ0(x).
+Here, we can suppose that the density ρ takes its values in [0, 1] and v stands for the speed of the traffic. This
+model can be seen as the mass conservation equation where velocity v depends only on the traffic density
+ρ. One frequently chooses v(ρ) = 1 − ρ up to a multiplicative constant representing the maximal velocity.
+This describes a transport of the initial density of agents ρ0 at t = 0 towards x = +∞, where the speed
+decreases as the density of agents increases.
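The LWR dynamics can be discretized with a standard Godunov scheme. The following is a minimal sketch (our own code, not from the paper) for f(ρ) = ρ(1 − ρ) on a periodic grid; the function name and grid setup are illustrative choices.

```python
import numpy as np

def godunov_lwr(rho0, dx, dt, steps):
    """Godunov scheme for rho_t + (rho*(1 - rho))_x = 0 on a periodic grid.

    Illustrative sketch: the paper works on the whole line with compactly
    supported data; periodic boundaries are a simplification here.
    Requires the CFL condition dt <= dx (since |f'| <= 1 on [0, 1]).
    """
    f = lambda r: r * (1.0 - r)
    rho = rho0.astype(float).copy()
    for _ in range(steps):
        uL, uR = rho, np.roll(rho, -1)
        # Godunov flux for the concave flux f (maximum at rho = 1/2):
        #   uL <= uR : minimum of f over [uL, uR], attained at an endpoint;
        #   uL >  uR : maximum over [uR, uL], i.e. f(1/2) = 1/4 if 1/2 inside.
        fmin = np.minimum(f(uL), f(uR))
        fmax = np.where((uR <= 0.5) & (0.5 <= uL), 0.25,
                        np.maximum(f(uL), f(uR)))
        flux = np.where(uL <= uR, fmin, fmax)   # flux at right edge of each cell
        rho = rho - dt / dx * (flux - np.roll(flux, 1))
    return rho
```

Being monotone and conservative under the CFL condition, the scheme conserves mass exactly and keeps the density between the bounds of the initial datum.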
+Then, in [17], Hughes proposed a model of pedestrian evacuation as a system of two equations on ρ and
+φ which is known as Hughes’ model. In the multi-dimensional model, ρ is the density of pedestrians with
+respect to time t and space x. The dynamics of ρ is governed by LWR conservation laws with direction
+field oriented towards the exits of a bounded domain Ω. In order to prescribe the direction towards the exit
+preferred by a pedestrian at location x at a time t, Hughes defines φ(t, x), the “potential field” satisfying an
+eikonal equation. The potential φ is zero on the exits located on ∂Ω. A pedestrian would then choose to
+“descend the gradient” of this potential in order to leave the domain Ω by these exits. The theory of the Hughes’
+model is still incomplete, even in one space dimension. In the 1D case, the model of [17] takes the form:
+(1.2a)    ρt + [sign(−∂xφ)ρv(ρ)]x = 0
+(1.2b)    ρ(t, x = ±1) = 0
+(1.2c)    |∂xφ| = 1/v(ρ)
+(1.2d)    φ(t, x = ±1) = 0.
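To make the link between the potential φ and the turning-curve reformulation concrete, the sketch below (our own construction, with hypothetical helper names) computes the viscosity solution of the 1D eikonal problem with c = 1/v by quadrature: φ(x) = min(∫₋₁ˣ c(ρ), ∫ₓ¹ c(ρ)), and the point where the two running costs balance plays the role of the turning point.

```python
import numpy as np

def potential_and_turning_point(rho, c, x):
    """Viscosity solution of |phi_x| = c(rho), phi(-1) = phi(1) = 0 on a grid.

    phi(x) = min( int_{-1}^{x} c(rho), int_{x}^{1} c(rho) ); trapezoidal
    quadrature, purely for illustration.
    """
    cost = c(np.asarray(rho, dtype=float))
    left = np.concatenate([[0.0],
                           np.cumsum(0.5 * (cost[1:] + cost[:-1]) * np.diff(x))])
    right = left[-1] - left          # remaining cost up to x = +1
    phi = np.minimum(left, right)
    xi = x[np.argmin(np.abs(left - right))]   # where the two costs balance
    return phi, xi
```

For a uniform density the two one-sided costs balance at the center of the corridor, so the returned turning point is 0.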
+This problem (1.2) is set up in a corridor with two exits; upon renormalization, we assume that Ω = (−1, 1)
+and that the exits are located at x = ±1. At t = 0 the pedestrians are distributed with a given density ρ0
+defined in [−1, 1] and at t > 0, the pedestrians want to leave the corridor by either one of the exits (as if a
+∗Submitted to the editors DATE.
+†Institut Denis Poisson CNRS UMR 7013, Université de Tours, Université d’Orléans, Parc Grandmont, 37200 Tours, France
+and Peoples’ Friendship University of Russia (RUDN University) 6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation
+(Boris.Andreianov@lmpt.univ-tours.fr, https://www.idpoisson.fr/andreianov/).
+‡Institut Denis Poisson, Université de Tours, Parc Grandmont, 37200 Tours, France (theo.girard@lmpt.univ-tours.fr).
+B. ANDREIANOV, T. GIRARD
+fire alarm starts ringing at t = 0). The pedestrians move forward (with the positive flux ρ ↦ +ρv(ρ)) or
+backward (with ρ ↦ −ρv(ρ)) depending on the sign of ∂xφ. This results in (1.2a) being a discontinuous-flux
+LWR conservation law. The sign of ∂xφ is prescribed by the eikonal equation (1.2c), where c(ρ) = 1/v(ρ)
+is a cost function that is high where the crowd is slow. Consequently, the pedestrians tend to avoid those
+“congested” regions.
+The Dirichlet boundary condition (1.2b) on the density ρ is understood in the Bardos-LeRoux-Nédélec sense
+standard for scalar conservation laws; it is shown in [5, Sect. 3] that upon extending ρ0 by the value zero on
+R\[−1, 1], one can replace the initial-boundary value problem (1.2a)-(1.2b) with ρ0 : (−1, 1) −→ [0, 1] by the
+pure initial-value problem for (1.2a) with the extended datum ρ0 : R −→ [0, 1] (the extension means that
+ρ0, now defined on R, is supported in [−1, 1]). We adopt this viewpoint and require, throughout the paper,
+(1.3)    ρ0 ∈ L∞(R; [0, 1]),    ρ0(x) = 0 for x /∈ [−1, 1];
+note that being compactly supported, ρ0 ∈ L1(R). Assumption (1.3) for the conservation law (1.2a) set up
+in the whole space can be seen as “open-end condition” at exits; we refer to Section 4 for models with more
+involved exit behavior.
+In [13], the 1D Hughes’ model (1.2) has been reformulated in terms of a “turning curve” ξ(t) instead of the
+potential φ. Following the turning curve approach, our prototype model in the sequel will be:
+(1.4a)    ρt + [sign(x − ξ(t))ρv(ρ)]x = 0
+(1.4b)    ∫_{−1}^{ξ(t)} c(ρ(t, x)) dx = ∫_{ξ(t)}^{1} c(ρ(t, x)) dx,
+with ρ defined for t ∈ [0, T ], T > 0, and x ∈ R and with initial datum of the form (1.3). Here c denotes
+a generic cost function. It is proven in [13] that we can equivalently consider either the Hughes’ model
+potential equation (1.2c)-(1.2d) or the reformulated problem (1.4b) with the cost function c(ρ) = 1/v(ρ).
+However, here, we will consider a cost verifying the following conditions:
+(1.5)    c ∈ W 1,∞([0, 1]),    c(ρ) ≥ 1 for all ρ ∈ [0, 1],    c is increasing on [0, 1].
+In (1.4), ρ is considered to be an entropy solution to (1.4a). Such a notion of solution, with particular
+attention to the admissibility of the jump of ρ across the turning curve x = ξ(t), was proposed in [13] (we
+will slightly simplify this solution notion). On the other hand, ξ is a pointwise defined solution to (1.4b),
+whose existence in L∞ and uniqueness follow from the intermediate value theorem under the conditions (1.5).
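Numerically, the pointwise equation (1.4b) can be solved exactly along these lines: since c ≥ 1, the cumulative cost is strictly increasing in the upper integration bound, so the balance point is its inverse at half the total cost. A small sketch (our own illustrative discretization, not part of the paper):

```python
import numpy as np

def turning_point(rho, c, x):
    """Solve int_{-1}^{xi} c(rho) dx = int_{xi}^{1} c(rho) dx for xi.

    The cumulative cost L is strictly increasing (c >= 1), so the unique
    root is L^{-1}(L(1)/2), computed by linear interpolation on the grid x.
    """
    cost = c(np.asarray(rho, dtype=float))
    L = np.concatenate([[0.0],
                        np.cumsum(0.5 * (cost[1:] + cost[:-1]) * np.diff(x))])
    return float(np.interp(0.5 * L[-1], L, x))
```

For instance, with c(ρ) = 1 + ρ, an empty left half and a fully jammed right half give costs 1 and 2, whose balance point is ξ = 1/4.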
+In this paper, we will consider a class of generalisations of the “turning curve” model, keeping in mind the
+fact that, even in the setting (1.4), little is known about the well-posedness of the Hughes’ model. For
+notation’s sake, we consider a generic concave positive flux f such that f(0) = f(1) = 0 (one can take f(ρ) = ρv(ρ)
+to recover the LWR model):
+(1.6a)    ρt + [sign(x − ξ(t))f(ρ)]x = 0
+(1.6b)    ρ(0, x) = ρ0(x)
+(1.6c)    ξ = I(ρ).
+Here I is an abstract operator mapping the density ρ to a turning curve ξ. The problem (1.4) is a particular
+case of (1.6) where I is the solver of the integral equation (1.4b). Stating (1.6b), we mean that ρ0 fulfills
+(1.3), which corresponds to open-end evacuation at exits, as stated above.
+Let us briefly discuss known results on the specific problem (1.4) and its variants. In [13] uniqueness is
+proven for a definition of entropy solutions taking the discontinuity into account but considering ξ as being
+given beforehand (we will revisit this result in Section 2). In [2] global existence for Hughes’ model (with
+c(ρ) = 1/v(ρ)) is proven if one assumes that the density at the turning curve is zero for all times. In [5], a
+
+AN EXISTENCE RESULT FOR HUGHES’ MODEL
+uniqueness result in the same setting as this paper assuming moreover the BV regularity of the solutions
+is provided. In [23], [15] and [16] one can find numerical studies of the model. A proof of existence
+and uniqueness for the regularized problem can be found in [12]. The Hughes’ model is also revisited, with
+a different turning curve equation and numerical simulations, in [10], whose authors introduce a
+regularization by convolution of the density named the subjective density. We also use the same type of idea
+when applying our main result in the case of a general cost function c. The only general (with respect to
+the choice of the initial data) existence result is contained in [5], where solutions with BVloc regularity away
+from the turning curve were constructed via a well-chosen many-particle approximation. The result of [5] for
+problem (1.4) is limited to the case of an affine cost c(ρ) = 1 + αρ. Our result for the original setting (1.4)
+will also be limited to the affine cost case. However, we provide a shorter and less specific argument than
+the many-particle approximation of [5], and we require fewer assumptions on the velocity profile v compared
+to [5]. The fixed-point approach we develop appears to be rather flexible, since it permits us to handle several
+models of the form (1.6). We also adapt the arguments to exit behavior of the “capacity drop” kind, which is
+more realistic in the setting of crowd evacuation (cf. [8, 7]). However, we highlight the fact that our approach
+is restricted to situations where Lipschitz continuity of the turning curve ξ is guaranteed for the model at
+hand, which appears to be a strong restriction on its applicability; this restriction also appears in [5].
+1.2. Abstract framework and general results. In this paper we propose an existence result for
+problem (1.6), elaborated through a fixed-point argument under abstract assumptions on I. Roughly speaking,
+we require that I maps any admissible solution ρ of the equation (1.6a) to a Lipschitz continuous turning
+curve ξ. Furthermore, the Lipschitz constant of those turning curves must be uniformly bounded for any ρ.
+We stress that the Hughes’ model with affine cost c(ρ) = 1 + αρ enters our abstract framework. However, it
+is not clear whether, for general costs satisfying (1.5), the required Lipschitz bounds hold true. This issue
+for the original Hughes’ model is left for further investigation. Models with more regular dependence of ξ
+on ρ can be considered as well, including memory and relaxation effects, and for these models the Lipschitz
+continuity of ξ is justifiable for general costs.
+Let us first introduce some notation that will be used throughout the whole paper.
+• We denote {x < ξ(t)} := {(t, x) ∈ [0, T ] × R s.t. x < ξ(t)}. Analogously, we use {x = ξ(t)} and {x > ξ(t)}.
+• For any r > 0, we write BW1,∞(0, r) := { ξ ∈ W 1,∞((0, T ), R) s.t. ∥ξ̇∥∞ + ∥ξ∥∞ ≤ r }.
+• Analogously, we write BL1(0, r) for the set of ρ ∈ L1((0, T ) × R, [0, 1]) such that ∥ρ∥L1((0,T )×R) ≤ r.
+In problem (1.6), ρ is taken as an admissible solution to the discontinuous-flux LWR equation (1.6a). On
+the way to proving the existence result, we propose and use a slightly simpler notion of admissible solution
+for this equation than the notion used in [13], [2] and [1]. Those notions of solution are equivalent.
+Definition 1.1. Let ξ ∈ W 1,∞((0, T )). Let ρ0 ∈ L1(R, [0, 1]). Let f be a concave positive flux such that
+f(0) = 0 = f(1) and F(t, x, ρ) := sign(x − ξ(t))f(ρ).
+We say that ρ ∈ L1((0, T ) × R, [0, 1]) is an admissible solution to:
+(1.7)    ρt + F(t, x, ρ)x = 0,    ρ(t = 0, ·) = ρ0(·)
+if
+• For all φ ∈ C∞_c((0, T ) × R),
+(1.8)    ∬_Ω ρφt + F(t, x, ρ)φx dt dx = 0;
+• For all positive φ ∈ C∞_c({x < ξ(t)}) (resp. φ ∈ C∞_c({x > ξ(t)})), for all k ∈ [0, 1],
+(1.9)    −∬_Ω |ρ − k| φt + q(ρ, k)φx dt dx − ∫_R |ρ0 − k| φ(0, x) dx ≤ 0,
+where we set
+(1.10)    q(u, v) := sign(u − v) [F(t, x, u) − F(t, x, v)].
+Note that the notion of solution makes sense for arbitrary initial datum ρ0 ∈ L1(R, [0, 1]) but in order to
+keep consistency with the standard Hughes’ setting, we will restrict our attention to data ρ0 that fulfill (1.3).
+Remark 1.2. Note that in the above definition, no admissibility condition is prescribed at {x = ξ(t)}. Only
+the conservativity (the Rankine-Hugoniot condition following from (1.8)) is required at the location of the
+turning curve.
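As a quick illustration of the conservativity requirement, the Rankine-Hugoniot condition forces a jump between states uL and uR to travel at speed s = (f(uL) − f(uR))/(uL − uR); for the LWR flux f(ρ) = ρ(1 − ρ) this reduces to s = 1 − uL − uR. A minimal check (our own snippet):

```python
def rankine_hugoniot_speed(f, uL, uR):
    """Shock speed s = [f] / [u] dictated by conservativity across a jump."""
    return (f(uL) - f(uR)) / (uL - uR)

# For the LWR flux the formula collapses to 1 - uL - uR.
f = lambda r: r * (1.0 - r)
```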
+Remark 1.3. Definition 1.1 implies that ρ ∈ C0([0, T ], L1(R)). This is proved by adapting the argument
+of [9]; such an adapted proof can be found in [21]. Remembering this fact makes sense of the notation
+ρ(t, ·) without ambiguity.
+For a given (and fixed) ξ ∈ W 1,∞((0, T )), it is shown below that this notion of solution gives a well-posed
+discontinuous-flux conservation law in L1((0, T ) × R) when ρ0 belongs to L1(R; [0, 1]). We then define the solver operator:
+(1.11)    S0 : W 1,∞((0, T )) −→ L1((0, T ) × R),    ξ ↦ ρ.
+This operator S0 maps a turning curve ξ to S0(ξ) = ρ, the unique solution to (1.6a)-(1.6b), admissible in
+the sense of Definition 1.1, set up in the whole one-dimensional space.
+Remark 1.4. The uniqueness of a solution in the sense of Definition 1.1 still holds for
+F(t, x, p) := 1{x<ξ(t)}fL(p) + 1{x>ξ(t)}fR(p)
+where fL (resp. fR) is a convex negative (resp. concave positive) flux such that fL(0) = fL(1) = fR(0) =
+fR(1) = 0. These are the core properties of the fluxes on which our proof relies. For instance, modeling a
+slanted corridor, we can consider fL,R(ρ) := vL,R ρ(1−ρ) where vL and vR are positive constants accounting
+for the difference in speed for a pedestrian when moving to the right or the left exit.
+We now present the notion of solution used for the generalized Hughes’ model given by system (1.6). Recalling
+Remark 1.3, it makes sense for the operator equation (1.6c) to be verified for all t ∈ [0, T ]. In fact, we will
+require that ξ ∈ W 1,∞((0, T )) in order to obtain our main result. We then use the classical embedding result
+to identify ξ with a unique element of C0([0, T ]).
+Definition 1.5. Consider I : L1((0, T ) × R) −→ C0([0, T ]). We say that (ρ, ξ) is a solution to the generalized
+Hughes’ model (1.6) if ρ is a solution to (1.6a)-(1.6b) in the sense of Definition 1.1 and moreover, the
+equality ξ = I(ρ) holds in C0([0, T ]).
+Notice that such a solution can be seen as a fixed point of the composed operator S0◦I. In order to prove the
+existence of a solution, we prove a variant of Schauder’s fixed point theorem (see [25]). To be specific,
+denoting by I : ρ ↦ ξ the operator that serves to compute the interface and by D : ξ ↦ ρ the one that
+serves to compute the density, we prove the following statement:
+Lemma 1.6. Let (X, ∥·∥X) be a Banach space, (Y, ∥·∥Y ) a metric space and K a compact subset of Y . Take
+D : (K, ∥ · ∥Y ) −→ (X, ∥ · ∥X) a continuous operator. Assume there exists B a bounded closed convex subset
+of X such that:
+(1.12a)    I : (B, ∥ · ∥X) −→ (K, ∥ · ∥Y ) is a continuous operator,
+(1.12b)    D ◦ I(B) ⊂ B.
+Then D ◦ I admits a fixed point in B.
+Remark 1.7. We stress that the assumption (1.12a) implies that, on the subset B, I takes its values in K,
+making D ◦ I well-defined on B.
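The pattern of Lemma 1.6 can be exercised on a toy example with X = Y = ℝ, a compact set K = [−1/2, 1/2] and B = [1/4, 3/4]. Schauder's theorem only asserts existence of a fixed point; in this made-up toy the composition happens to be a contraction, so plain Picard iteration finds it. Entirely illustrative, with maps of our own choosing:

```python
import math

def fixed_point_of_composition(D, I, x0, tol=1e-12, itmax=10000):
    """Picard iteration on the composed map D o I (toy illustration only)."""
    x = x0
    for _ in range(itmax):
        xn = D(I(x))
        if abs(xn - x) < tol:
            return xn
        x = xn
    return x

# toy 'interface' operator I : B -> K and 'solver' operator D : K -> B
I = lambda r: 0.5 * math.cos(r)           # lands in the compact set K = [-1/2, 1/2]
D = lambda xi: 0.5 + 0.25 * math.sin(xi)  # maps back into B = [1/4, 3/4]
```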
+The assumptions of Lemma 1.6 permit us to formulate sufficient conditions for the existence of a solution in
+the sense of Definition 1.5. Specifically, the use of the sets BW 1,∞(0, r) (as K) and C0([0, T ]) (as Y ) is the
+key to the application of the Schauder fixed-point argument to S0 ◦ I under reachable assumptions on I in the
+Hughes’ model framework.
+We prove in Section 2 the following proposition, saying that S0 is continuous. This continuity matches
+the one required for the operator D in the above lemma.
+Proposition 1.8. Let ρ0 verify (1.3). If f satisfies the non-degeneracy condition:
+(1.13)    meas { x ∈ [−∥ρ0∥∞, ∥ρ0∥∞] s.t. f ′(x) = 0 } = 0,
+then the solver operator S0 : (W 1,∞((0, T )), ∥ · ∥∞) −→ (L1((0, T ) × R), ∥ · ∥L1((0,T )×R)) is continuous.
+Combining the previous results, we state the main result of this paper:
+Theorem 1.9. Let ρ0 verify (1.3). Let B be a convex closed bounded subset of L1((0, T ) × R) and let
+I : (B, ∥ · ∥L1((0,T )×R)) −→ (C0([0, T ], R), ∥ · ∥∞)
+be a continuous operator. Assume that f verifies (1.13). If there exists r > 0 such that:
+(1.14a)    I(B) ⊂ BW1,∞(0, r),
+(1.14b)    ∀ξ ∈ BW1,∞(0, r), the unique admissible solution to ρt + [sign(x − ξ(t))f(ρ)]x = 0 is in B,
+then there exists a solution (ρ, ξ) to the problem (1.6) in the sense of Definition 1.5.
+Remark 1.10. One can interpret B as the set where one looks for solutions to (1.6a).
+The central point in using this theorem is to construct the set B; in the applications below, two different
+choices for B are encountered.
+1.3. Applications. We search for properties of admissible solutions in the sense of Definition 1.1 that
+are independent of ξ. These properties, included in the construction of B, must guarantee that I(B) verifies
+(1.14a) but also that B is convex, bounded and closed in L1((0, T ) × R). In this subsection, we present three
+applications of Theorem 1.9.
+First, we consider the operator I0 associated with the problem (1.4b) with affine cost function (further detailed
+in Section 3). Let us exhibit the construction of a set B1 satisfying the conditions (1.14a)-(1.14b) for this
+choice of I. Notice that, thanks to the L1-contraction property of the admissible solution ρ that is justified
+within the uniqueness proof in Section 2, we have:
+(1.15)    ∀t ∈ [0, T ], ∥ρ(t, ·)∥L1(R) ≤ ∥ρ0∥L1(R)  ⇒  ∥ρ∥L1([0,T ]×R) ≤ T ∥ρ0∥L1(R).
+Furthermore, we prove that for a certain fixed constant C > 0 (whose value will be made precise later), for
+any ξ ∈ W 1,∞, a weak solution to (1.6a) in the sense (1.8) verifies (see Lemma 3.2 and also [5]):
+(1.16)    ∀a, b ∈ R, ∀s, t ∈ [0, T ],    | ∫_a^b (ρ(t, x) − ρ(s, x)) dx | ≤ C|t − s|.
+Finally, considering an initial datum 0 ≤ ρ0 ≤ 1, we set:
+(1.17)    B1 = { ρ ∈ BL1(0, T ∥ρ0∥L1) s.t. 0 ≤ ρ ≤ 1 and ρ verifies (1.16) }.
+Applying Theorem 1.9 with B1 given by (1.17) we get:
+Proposition 1.11. Assume that I0 : B1 −→ C0([0, T ], R) is the operator associated with equation (1.4b)
+with affine cost c(ρ) = 1 + αρ. If f verifies (1.13), then there exists a solution (ρ, ξ) to the problem (1.4) in
+the sense of Definition 1.5.
+As a second case, we treat Iδ the operator associated with a modified version of equation (1.4b) where ρ is
+replaced by an average density over recent past in equation (1.4b) (see (1.4b’)). This modification is inspired
+by the use of “subjective density” in pedestrian and traffic flows, proposed, e.g., in [10] and [8, 7] (cf. Section 4
+where subjective densities are used to model constrained evacuation at exits); this choice introduces inertia
+effect into agents’ perception of the crowd densities. In that setting, we can prove that the image of Iδ
+is contained in a bounded subset of W 1,∞((0, T )) without requiring the property (1.16). Consequently, we
+recover the global existence result for any cost c verifying (1.5) with the set B2 merely given by:
+B2 = { ρ ∈ BL1(0, T ∥ρ0∥L1) s.t. 0 ≤ ρ ≤ 1 }.
+As a third example, we consider Ĩǫ, the operator associated with problem (1.4b) with a relaxed equilibrium,
+modeling, in a way different from Iδ, an inertia effect in the interface dynamics. In this case, the set B2 also
+satisfies all the conditions needed to apply Theorem 1.9.
+Finally, another series of applications (which is an extension of all the previous results to models with
+different, phenomenologically relevant behavior of agents at exits) is provided in Section 4.
+1.4. Outline. In Section 2, we prove the main results of this paper, respectively Theorem 1.9 and
+Lemma 1.6, Proposition 1.8. These proofs hold in an abstract framework where the choice of I and B are
+not prescribed. Then, in Section 3, we detail the construction involving the set B1 satisfying the assumptions
+of Theorem 1.9 in the case of I0 being the operator associated with equation (1.4b) with affine cost. We also
+discuss the case of a general cost satisfying (1.5) and solve it for the modified operators Iδ and Ĩǫ using the
+set B2. Finally, in Section 4, we extend Theorem 1.9 to a situation with constrained evacuation at exits
+x = ±1.
+2. Proof of the main result. We first deduce Lemma 1.6 from the Schauder fixed-point theorem.
+Proof of Lemma 1.6. We recall that, thanks to condition (1.12a), D ◦ I is well defined. Moreover, D and
+I are continuous, so D ◦ I is continuous from B into itself. Take any subset A of B. The set I(A) ⊂ K
+is relatively compact in (Y, ∥ · ∥Y ). Since D is continuous from (K, ∥ · ∥Y ) into (X, ∥ · ∥X), D ◦ I(A)
+is a relatively compact subset of X. Consequently, D ◦ I is a compact operator from B into itself.
+Furthermore, B is a bounded closed convex subset of the Banach space X. We apply the Schauder fixed-point
+theorem (see [25]) and conclude the existence of a fixed point in B.
+In order to apply Lemma 1.6 with D = S0 the solver associated with the notion of solution of Definition
+1.1 (see (1.11)), we first need to check that S0 is well defined from W 1,∞((0, T )) into L1((0, T ) × R) when
+∥ρ0∥L1(R) < +∞. This is equivalent to well-posedness for the problem (1.7).
+We prove below that, thanks to the particular choice of fluxes on each side of the turning curve (emphasized
+in Remark 1.4), Definition 1.1 is restrictive enough to grant uniqueness. This notion of solution is however
+less restrictive than the one proposed in [13, 1]. This implies that both notions are equivalent, and the existence
+of such solutions is then directly inherited from the proof found in [1]. Note that one can prove the existence
+result for our notion of solution through the convergence of a finite volume scheme (we do so in Section 4,
+in the context of flux-limited exit behavior at the exits x = ±1).
+Theorem 2.1. Let ρ, ˆρ be two entropy solutions in the sense of Definition 1.1 with initial datum ρ0 (resp.
+ˆρ0). Let Lf be the Lipschitz constant of f. If ξ ∈ W 1,∞((0, T )), we have:
+for a.e. t ∈ [0, T ], ∀a, b ∈ R,    ∫_a^b |ρ(t, x) − ˆρ(t, x)| dx ≤ ∫_{a−Lf t}^{b+Lf t} |ρ0(x) − ˆρ0(x)| dx.
+In particular, there exists at most one entropy solution associated to a given initial datum ρ0.
+In order to prove this Theorem, we introduce notation for the right and left strong traces of ρ along a
+Lipschitz curve ξ. Let ξ ∈ W 1,∞((0, T ), R). Then γLρ ∈ L∞((0, T )) (resp. γRρ) is such that, for any
+φ ∈ C0([0, 1]),
+ess lim_{ǫ→0+} (1/ǫ) ∫_0^T ∫_{ξ(t)−ǫ}^{ξ(t)} |φ(ρ(t, x)) − φ(γLρ(t))| dx dt = 0
+(respectively, ess lim_{ǫ→0+} (1/ǫ) ∫_0^T ∫_{ξ(t)}^{ξ(t)+ǫ} |φ(ρ(t, x)) − φ(γRρ(t))| dx dt = 0).
+The existence of those traces is proven in [24].
+Remark 2.2. Generalization of the approach of the present paper to a general cost function c, for the original
+Hughes’ model, may require going below the Lipschitz regularity of ξ. In this respect, let us point out that
+extension of the above uniqueness claim to W 1,1 regularity of ξ is feasible, while weakening the regularity of
+ξ even more presents a serious difficulty for the theory of discontinuous-flux conservation laws [4].
+Proof of Theorem 2.1. Remembering Remark 1.4 and for a more comprehensive presentation of the proof,
+we denote fR = f and fL = −f.
+The main idea of the proof consists of using Kruzhkov’s doubling-of-variables technique (see [14]) on each side
+of the curve {x = ξ(t)}. Since ξ is Lipschitz continuous, we can join both pieces by taking left and right traces
+along this turning curve, following the general approach of [4, 8]. We get, for any φ ∈ D+,
+(∗)    −∬_Ω |ρ − ˆρ|φt + q(ρ, ˆρ)φx ≤ ∫_0^T φ(t, ξ(t)) [qR(γRρ, γRˆρ) − qL(γLρ, γLˆρ)] dt,
+where qL,R(ρ, ˆρ) := sign(ρ − ˆρ) [ fL,R(ρ) − fL,R(ˆρ) − ξ̇(t)(ρ − ˆρ) ].
+On the other hand, using the existence of traces, we also recover from (1.8) the Rankine-Hugoniot condition:
+(∗∗ρ)    for a.e. t ∈ (0, T ),    fR(γRρ(t)) − ξ̇(t)γRρ(t) = fL(γLρ(t)) − ξ̇(t)γLρ(t).
+We also have the analogous relation for ˆρ that we denote (∗∗ˆρ).
+Fix t ∈ (0, T ) such that (∗∗ρ) and (∗∗ˆρ) are true. We denote the set of values for γLρ (resp. γRρ) that verify
+(∗∗ρ):
+ΓL,R := { a ∈ R s.t. ∃b ∈ R, fL,R(a) − ξ̇(t)a = fL,R(b) − ξ̇(t)b }.
+Due to the particular choice of the pair of fluxes (fL, fR), these sets are non-empty. Their geometry is
+pictured below.
+ΓR
+ΓL
+y = fL(x) − ˙ξ(t)x
+y = fR(x) − ˙ξ(t)x
+Recalling the properties of f_L and f_R emphasized in Remark 1.4 and using the signs of f′_L and f′_R, we
+let the reader verify that, for any ξ̇(t), x ↦ f_R(x) − ξ̇(t) x has the same monotonicity on Γ_R as
+x ↦ f_L(x) − ξ̇(t) x on Γ_L.
+Consequently, if (γ_L ρ, γ_R ρ) verifies (∗∗ρ) and (γ_L ρ̂, γ_R ρ̂) verifies (∗∗ρ̂), then
+• sign(γ_R ρ − γ_R ρ̂) sign( f_R(γ_R ρ) − f_R(γ_R ρ̂) − ξ̇(t)(γ_R ρ − γ_R ρ̂) )
+  = sign(γ_L ρ − γ_L ρ̂) sign( f_L(γ_L ρ) − f_L(γ_L ρ̂) − ξ̇(t)(γ_L ρ − γ_L ρ̂) );
+• (∗∗ρ)−(∗∗ρ̂) implies that
+  f_R(γ_R ρ) − f_R(γ_R ρ̂) − ξ̇(t)(γ_R ρ − γ_R ρ̂) = f_L(γ_L ρ) − f_L(γ_L ρ̂) − ξ̇(t)(γ_L ρ − γ_L ρ̂).
+
+8
+B. ANDREIANOV, T. GIRARD
+Therefore we have:
+    for a.e. t ∈ (0, T),   q_R(γ_R ρ, γ_R ρ̂) − q_L(γ_L ρ, γ_L ρ̂) = 0.
+Consequently, from (∗), we recover the global Kato inequality: for any φ ∈ D^+(Ω),
+    −∬ |ρ − ρ̂| φ_t + q(ρ, ρ̂) φ_x ≤ 0.
+The remaining arguments are identical to the classical Kruzhkov framework. Integrating on the trapezoid
+1_{[0,t]}(s) 1_{[a−L_f(t−s), b+L_f(t−s)]}(x), L_f being the Lipschitz constant of f, we get the localized L^1
+contraction property:
+(2.1)    ∫_a^b |ρ(t, x) − ρ̂(t, x)| dx ≤ ∫_{a−L_f t}^{b+L_f t} |ρ(0, x) − ρ̂(0, x)| dx.
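The contraction property (2.1) can be illustrated numerically in the classical Kruzhkov setting (away from the turning curve, where the flux has a fixed sign). The sketch below uses a Godunov finite volume scheme with the assumed model flux f(ρ) = ρ(1 − ρ); the grid, the two initial data and the constants are illustrative choices, not taken from the paper.

```python
import numpy as np

def f(u):                       # assumed concave flux f(rho) = rho(1 - rho), f(0) = f(1) = 0
    return u * (1.0 - u)

def godunov(ul, ur, s=0.5):     # Godunov flux for the concave f above (argmax at s)
    if ul <= ur:
        return min(f(ul), f(ur))
    return f(s) if ur < s < ul else max(f(ul), f(ur))

def step(u, lam):               # one explicit Godunov step, lam = dt/dx
    F = np.array([godunov(u[j], u[j + 1]) for j in range(len(u) - 1)])
    v = u.copy()
    v[1:-1] -= lam * (F[1:] - F[:-1])
    return v

dx, dt, T = 1e-2, 4e-3, 0.5     # CFL: max|f'| * dt/dx = 0.4 <= 1
x = np.arange(-2.0, 2.0, dx)
rho  = np.where(np.abs(x) < 0.5, 0.8, 0.0)   # two initial data differing on (-1/2, 1/2)
rhoh = np.where(np.abs(x) < 0.5, 0.3, 0.0)
d0 = np.sum(np.abs(rho - rhoh)) * dx         # L1 distance at t = 0
for _ in range(int(T / dt)):
    rho, rhoh = step(rho, dt / dx), step(rhoh, dt / dx)
dT = np.sum(np.abs(rho - rhoh)) * dx         # L1 distance at t = T: dT <= d0, as in (2.1)
```

Since the Godunov scheme is monotone under the CFL condition, the discrete L^1 distance between the two numerical solutions is non-increasing in time, mirroring (2.1).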
+Consequently, the solver operator S_0 is well defined from W^{1,∞}((0, T)) into L^1((0, T) × R). In order to
+apply Lemma 1.6 with D = S_0 : (W^{1,∞}((0, T)), ∥·∥_∞) → (L^1((0, T) × R), ∥·∥_{L^1((0,T)×R)}), we also show
+the continuity of this operator. Let us denote, for any a < b ∈ R, s < t ∈ [0, T], the trapezoid:
+(2.2)    T^{s,t}_{a,b} := { (τ, x) ∈ (0, T) × R s.t. τ ∈ [s, t], x ∈ (a + (τ − s)L_f , b − (τ − s)L_f) },
+where L_f is the Lipschitz constant of f. We isolate the following useful lemma, which comes from (2.1).
+Lemma 2.3. Let ρ_0 satisfy (1.3), ξ ∈ W^{1,∞}((0, T)) and ρ be the entropy solution in the sense of Definition
+1.1 to (1.7) on (0, T) × R. Denote by ρ̂ the Kruzhkov entropy solution on (s, t) × R to^1
+    ρ̂_t + f(ρ̂)_x = 0,    ρ̂(s, ·) = ρ(s, ·) 1_{(a,b)}(·).
+Then, for any a < b ∈ R, s < t ∈ [0, T], there holds
+(2.3)    T^{s,t}_{a,b} ⊂ {x > ξ(t)} ⟹ ρ = ρ̂ a.e. on T^{s,t}_{a,b}.
+Proof. This lemma immediately follows from (2.1).
+We now prove Proposition 1.8 using this lemma.
+Proof of Proposition 1.8. Consider (ξ_n)_{n∈N} and ξ ∈ W^{1,∞}((0, T)) such that ∥ξ_n − ξ∥_∞ → 0. We denote
+ρ_n := S_0(ξ_n). Let K be a compact subset of {x > ξ(t)} and let ε > 0 be such that K ⊂ {x > ξ(t) + ε}.
+We cover K by a finite number of trapezoids of the form (2.2). Without loss of generality we can suppose
+that each trapezoid is contained in {x > ξ(t) + ε}:
+    K ⊂ ⋃_{i∈I} T^{s_i,t_i}_{a_i,b_i} ⊂ {x > ξ(t) + ε},    Card(I) < +∞.
+Since ∥ξ_n − ξ∥_∞ → 0, there exists n_0 ∈ N such that, for all t ∈ [0, T], n ≥ n_0 ⇒ |ξ_n(t) − ξ(t)| ≤ ε.
+This implies ξ_n(t) ∈ [ξ(t) − ε, ξ(t) + ε]. Then,
+(2.4)    ∀x ∈ R \ [ξ(t) − ε, ξ(t) + ε],   sign(x − ξ_n(t)) = sign(x − ξ(t)).
+Consequently, for such an n_0 and any n ≥ n_0, each trapezoid T^{s_i,t_i}_{a_i,b_i} ⊂ {x > ξ_n(t)}. Using Lemma
+2.3, for any n ≥ n_0, ρ_n is equal almost everywhere on T^{s_i,t_i}_{a_i,b_i} to the Kruzhkov entropy solution of
+    ρ_t + f(ρ)_x = 0,    ρ(s_i, ·) = ρ_n(s_i, ·) 1_{(a_i,b_i)}(·).
+^1 Here ρ(s, ·) is understood in view of s being a Lebesgue point of ρ ∈ L^∞((0, T), L^1(R)). Recalling Remark 1.3, this is in
+fact true for any s ∈ [0, T].
+
+We are now in a position to apply the averaging compactness lemma (see Theorem 5.4.1 in [19]) on the
+trapezoid T^{s_0,t_0}_{a_0,b_0}. We get a subsequence (ρ_{n_k})_{k∈N} that converges in L^1(T^{s_0,t_0}_{a_0,b_0}). We then apply the
+averaging compactness lemma with (ρ_{n_k})_k on T^{s_1,t_1}_{a_1,b_1}. Repeating this process for each i ∈ I, we recover a
+subsequence (ρ_{n_j})_j that converges in L^1(⋃_{i∈I} T^{s_i,t_i}_{a_i,b_i}). Then (ρ_{n_j})_j converges in L^1(K).
+To conclude, we point out that this reasoning holds for any compact K ⊂ {x > ξ(t)}, and likewise for compact
+subsets of {x < ξ(t)}. Since ξ is Lipschitz, meas({x = ξ(t)}) = 0. Consequently there exists a subsequence
+(ρ_{n_k}) that converges almost everywhere on (0, T) × R and in L^1_loc((0, T) × R). Moreover, we have ρ_{n_k} → ρ
+in L^1((0, T) × R) because, for [a, b] ∩ [−1, 1] = ∅, ρ_n = 0 on T^{0,T}_{a,b}, due to the choice of ρ_0 verifying (1.3).
+Now, ρ is actually S_0(ξ). Indeed, recall that ρ has no admissibility condition to satisfy on {x = ξ(t)} beyond
+the Rankine-Hugoniot relation. Then, we can pass to the limit in the entropy inequalities (1.9) (where, for
+n large enough, the support of the test function does not intersect the curve {x = ξ_n(t)} for t ∈ [0, T]) and
+pass to the limit in (1.8) by dominated convergence.
+This reasoning can be reproduced for any subsequence of (ρ_n)_n. By a classical compactness argument, since
+every convergent subsequence (S_0(ξ_{n_k}))_{k∈N} converges to S_0(ξ), the whole sequence (S_0(ξ_n))_n converges in
+L^1 to S_0(ξ). So S_0 : (W^{1,∞}((0, T)), ∥·∥_∞) → (L^1((0, T) × R), ∥·∥_{L^1((0,T)×R)}) is continuous.
+We now combine all the previous results to get existence of a solution in the sense of Definition 1.5.
+Proof of Theorem 1.9. Suppose there exists r > 0 such that (1.14a)-(1.14b) are verified.
+Using the notation of Lemma 1.6 we take:
+• Y = (C^0([0, T]), ∥·∥_∞);
+• X = (L^1((0, T) × R), ∥·∥_{L^1((0,T)×R)});
+• K the compact subset of C^0([0, T]) obtained as the image of B_{W^{1,∞}}(0, r) under the standard embedding.
+Using Proposition 1.8 and Theorem 2.1, we know that S_0 : (K, ∥·∥_Y) → (X, ∥·∥_X) is well defined and
+continuous. Further, notice that condition (1.14a) is equivalent to (1.12a) and that condition (1.14b) implies
+(1.12b). We are now in a position to use Lemma 1.6 and conclude that a solution to (1.6) in the sense of
+Definition 1.5 exists.
+3. Lipschitz continuity of the turning curve: examples. In this section, we enumerate
+examples of the abstract problem (1.6)
+    ρ_t + [sign(x − ξ(t)) f(ρ)]_x = 0,
+    ρ(0, x) = ρ_0(x),
+    ξ = I(ρ),
+where we can construct a set B such that the prescribed operator I satisfies the required properties in order
+to apply Theorem 1.9; this includes the original Hughes’ model (1.4) with affine costs and its modifications,
+taking into account time-inertia effects and allowing for general costs. Note that further examples, with
+modified exit conditions, are considered in Section 4. For such examples, we exhibit the construction of this
+set. Consequently, we get existence of a solution in the sense of Definition 1.5 in those situations.
+3.1. Hughes’s model with affine cost. We first consider the model (1.4):
+    ρ_t + [sign(x − ξ(t)) ρ v(ρ)]_x = 0,
+    ∫_{−1}^{ξ(t)} c(ρ(t, x)) dx = ∫_{ξ(t)}^{1} c(ρ(t, x)) dx,
+with initial datum satisfying (1.3), where we choose, for some α > 0,
+(3.3)    c(p) = 1 + αp.
+First, let us recall the definition of the set B_1 constructed in the introduction:
+(1.17)    B_1 = { ρ ∈ B_{L^1}(0, T ∥ρ_0∥_{L^1}) s.t. 0 ≤ ρ ≤ 1 and ρ verifies (1.16) }.
+
+In this setup, we have the following proposition:
+Proposition 3.1. Assume the cost is given by (3.3). Then the following properties hold:
+1. For any ξ ∈ W^{1,∞}((0, T)), S_0(ξ) ∈ B_1.
+2. There exists r > 0 such that, for any ρ ∈ B_1, there exists a unique solution ξ ∈ B_{W^{1,∞}}(0, r) to
+(1.4b). We denote by I_0 the operator that maps ρ ∈ B_1 to the unique solution ξ of (1.4b). Consequently,
+this operator is well defined and single-valued.
+3. I_0 : (B_1, ∥·∥_{L^1((0,T)×R)}) → (W^{1,∞}([0, T]), ∥·∥_∞) is continuous.
+4. B_1 is closed, convex and bounded in L^1((0, T) × R).
+Consequently, I_0 verifies (1.14a)-(1.14b) for the set B_1. We apply Theorem 1.9 and get the desired existence
+of a solution for the problem (1.4) with affine cost (3.3). That proves Proposition 1.11.
+In order to prove Proposition 3.1, we rely on two lemmas that we choose to isolate so as to reuse them in
+the other examples.
+Lemma 3.2. Let a, b ∈ R, a < b, and s, t ∈ [0, T], s < t. Fix ξ ∈ W^{1,∞}((0, T)) and denote by ρ a solution
+in the sense of Definition 1.1. Then, there exists C > 0, independent of a, b, s, t, ξ and ρ, such that:
+(3.4)    | ∫_a^b ρ(t, x) − ρ(s, x) dx | ≤ C|t − s|.
+We recall that there is no ambiguity in considering ρ(t, ·) since ρ ∈ C^0([0, T], L^1(R)) (see Remark 1.3).
+Proof of Lemma 3.2. Let (κ_n)_{n∈N} be a mollifier. We set
+    Ψ(τ, x) := 1_{[a,b]}(x) 1_{[s,t]}(τ)   and   φ(τ, x) := Ψ ∗ κ_n(τ, x).
+Using φ as a test function in (1.8) and letting n → +∞, we get:
+    ∫_a^b ρ(s, x) − ρ(t, x) dx + ∫_s^t F(τ, a, ρ(τ, a)) − F(τ, b, ρ(τ, b)) dτ = 0.
+Consequently,
+    | ∫_a^b ρ(t, x) − ρ(s, x) dx | ≤ | ∫_s^t F(τ, a, ρ(τ, a)) − F(τ, b, ρ(τ, b)) dτ | ≤ ( 2 sup_{p∈[0,1]} |f(p)| ) |t − s|.
+Lemma 3.3. Let s < t ∈ [0, T]. Let ξ be a solution to (1.4b). We denote ξ̲ := min(ξ(t), ξ(s)) and
+ξ̄ := max(ξ(t), ξ(s)). Then
+(3.5)    2 |ξ(t) − ξ(s)| ≤ | ∫_{−1}^{ξ̲} c(ρ(t, x)) − c(ρ(s, x)) dx − ∫_{ξ̄}^{1} c(ρ(t, x)) − c(ρ(s, x)) dx |.
+Proof of Lemma 3.3. We first treat the case ξ(s) ≤ ξ(t). We have:
+    ∫_{−1}^{ξ(s)} c(ρ(s, x)) dx = ∫_{ξ(s)}^{ξ(t)} c(ρ(s, x)) dx + ∫_{ξ(t)}^{1} c(ρ(s, x)) dx,
+    ∫_{−1}^{ξ(s)} c(ρ(t, x)) dx = − ∫_{ξ(s)}^{ξ(t)} c(ρ(t, x)) dx + ∫_{ξ(t)}^{1} c(ρ(t, x)) dx.
+Subtracting the second equality from the first,
+    ∫_{ξ(s)}^{ξ(t)} c(ρ(s, x)) + c(ρ(t, x)) dx = ∫_{−1}^{ξ(s)} c(ρ(s, x)) − c(ρ(t, x)) dx − ∫_{ξ(t)}^{1} c(ρ(s, x)) − c(ρ(t, x)) dx.
+
+In the opposite case ξ(s) ≥ ξ(t), an analogous argument gives:
+    ∫_{ξ(t)}^{ξ(s)} c(ρ(s, x)) + c(ρ(t, x)) dx = ∫_{−1}^{ξ(t)} c(ρ(t, x)) − c(ρ(s, x)) dx − ∫_{ξ(s)}^{1} c(ρ(t, x)) − c(ρ(s, x)) dx.
+Using the fact that c ≥ 1 we get:
+    2|ξ(t) − ξ(s)| = 2(ξ̄ − ξ̲) ≤ ∫_{ξ̲}^{ξ̄} c(ρ(s, x)) + c(ρ(t, x)) dx
+                  ≤ | ∫_{−1}^{ξ̲} c(ρ(s, x)) − c(ρ(t, x)) dx − ∫_{ξ̄}^{1} c(ρ(s, x)) − c(ρ(t, x)) dx |.
+We are now ready to prove Proposition 3.1.
+Proof of Proposition 3.1. First, consider ρ_0 satisfying (1.3). Using ρ̂ = 0 in (2.1), we prove that for all t in
+[0, T], ∥ρ(t, ·)∥_{L^1(R)} ≤ ∥ρ_0∥_{L^1(R)}. This readily yields:
+(1.15)    ∥ρ∥_{L^1([0,T]×R)} ≤ T ∥ρ_0∥_{L^1(R)}.
+Combining this result with Lemma 3.2, we prove the first assertion of Proposition 3.1.
+Second, fix ρ ∈ B_1. We prove existence and uniqueness of ξ ∈ L^∞([0, T]) satisfying (1.4b) for any t ∈ [0, T].
+Let t ∈ [0, T]; we set:
+    Ψ^+(a) := ∫_{−1}^{a} c(ρ(t, x)) dx,    Ψ^−(a) := ∫_{a}^{1} c(ρ(t, x)) dx.
+One can notice that, because c > 0, Ψ^+ is a continuous strictly increasing function, while Ψ^− is continuous
+and strictly decreasing on [−1, 1]. Therefore, a ↦ Ψ^+(a) − Ψ^−(a) is continuous, strictly increasing, negative
+at a = −1 and positive at a = 1. Consequently, there exists a unique ã ∈ (−1, 1) such that Ψ^+(ã) = Ψ^−(ã).
+This can be done for any t ∈ [0, T]. Consequently, we get existence and uniqueness of ξ ∈ L^∞.
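The monotonicity argument above suggests a simple numerical procedure for locating the turning point at a fixed time: bisection on the continuous, strictly increasing map a ↦ Ψ^+(a) − Ψ^−(a). The sketch below assumes a density profile sampled on a grid of [−1, 1] and the affine cost (3.3); the function names and grid are hypothetical.

```python
import numpy as np

def turning_point(rho, x, c):
    # Bisection for the unique a in (-1, 1) with Psi+(a) = Psi-(a), where
    # Psi+(a) = int_{-1}^a c(rho) dx and Psi-(a) = int_a^1 c(rho) dx (Riemann sums).
    w = c(rho)
    dx = x[1] - x[0]
    def gap(a):                  # Psi+ - Psi-: continuous, strictly increasing since c > 0
        return (np.sum(w[x <= a]) - np.sum(w[x > a])) * dx
    lo, hi = -1.0, 1.0           # gap(-1) < 0 < gap(1)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

x = np.linspace(-1.0, 1.0, 2001)
c = lambda p: 1.0 + 0.5 * p      # affine cost (3.3) with alpha = 1/2, an illustrative choice
xi = turning_point(np.zeros_like(x), x, c)   # symmetric (empty) corridor: xi near 0
```

For a symmetric density the computed turning point sits at the center of the corridor, up to grid resolution.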
+We now prove that ξ ∈ W^{1,∞}([0, T]). Using Lemma 3.3 we get:
+    2 |ξ(t) − ξ(s)| ≤ | ∫_{−1}^{ξ̲} c(ρ(t, x)) − c(ρ(s, x)) dx − ∫_{ξ̄}^{1} c(ρ(t, x)) − c(ρ(s, x)) dx |
+                   ≤ α | ∫_{−1}^{ξ̲} ρ(t, x) − ρ(s, x) dx | + α | ∫_{ξ̄}^{1} ρ(t, x) − ρ(s, x) dx |.
+Using Lemma 3.2 with the choice (3.3) of the cost, we get:
+    2 |ξ(t) − ξ(s)| ≤ 2αC |t − s|.
+We conclude that taking r = αC guarantees that ξ always lies in B_{W^{1,∞}}(0, r).
+We now prove the continuity of the operator I_0. Let us consider ρ, ρ_n ∈ B_1. Then, for a given t ∈ [0, T],
+using (1.4b) for both ξ := I_0(ρ) and ξ_n := I_0(ρ_n), we recover:
+    ∫_{ξ_n(t)}^{ξ(t)} c(ρ) + ∫_{−1}^{ξ_n(t)} c(ρ) − ∫_{−1}^{ξ_n(t)} c(ρ_n) = ∫_{ξ(t)}^{ξ_n(t)} c(ρ) + ∫_{ξ_n(t)}^{1} c(ρ) − ∫_{ξ_n(t)}^{1} c(ρ_n).
+Rearranging the integrals, we get:
+    2 ∫_{ξ_n(t)}^{ξ(t)} c(ρ) = ∫_{−1}^{1} [c(ρ) − c(ρ_n)] sign(x − ξ_n(t)).
+
+Notice that
+    ∫_0^T |ξ − ξ_n| ≤ ∫_0^T | ∫_{ξ(t)}^{ξ_n(t)} c(ρ) | ≤ (1/2) ∫_0^T | ∫_{−1}^{1} sign(x − ξ_n(t)) [c(ρ) − c(ρ_n)] |
+                  ≤ (1/2) ∫_0^T ∫_{−1}^{1} |c(ρ) − c(ρ_n)| ≤ (α/2) ∫_0^T ∫_{−1}^{1} |ρ − ρ_n|.
+Consequently, if ∥ρ − ρ_n∥_{L^1((0,T)×R)} → 0, then ∥ξ − ξ_n∥_{L^1((0,T))} → 0.
+We recall that ξ, ξ_n ∈ I_0(B_1) are r-Lipschitz. On any open subset of [0, T] there exists a point t where the
+continuous function ξ(·) − ξ_n(·) is less than or equal to its L^1-average. Using the fact that [0, T] can be
+covered by a finite ε-network and that the derivative of ξ(·) − ξ_n(·) is bounded on this network, we recover
+that ∥ξ − ξ_n∥_∞ → 0 when ∥ρ − ρ_n∥_{L^1((0,T)×R)} → 0. This proves the third point of Proposition 3.1.
+Finally, let ρ_1, ρ_2 ∈ B_1 and λ ∈ [0, 1]; it is readily checked that λρ_1 + (1 − λ)ρ_2 still satisfies (3.4), so B_1
+is convex. It is also readily checked that we can pass to the L^1((0, T) × R) limit in (3.4), proving that B_1 is
+closed. By construction B_1 is bounded. That ends the proof of Proposition 3.1.
+3.2. The general cost case evaluated for a subjective density. In the same setup (1.4), let us
+further examine the situation for a cost function c verifying (1.5). Most of the items of Proposition 3.1 hold
+with the set B_1. The first point is independent of the nature of c. The proof of the third point still holds for
+a general cost, provided the second point holds, and the proof of existence and uniqueness of ξ ∈ L^∞((0, T))
+is still valid. In fact, the main issue lies in proving that ξ is Lipschitz for any ρ in a given set B.
+In order to explore this issue, let us start from the estimate (3.5) of Lemma 3.3:
+    2 |ξ(t) − ξ(s)| ≤ | ∫_{−1}^{ξ̲} c(ρ(t, x)) − c(ρ(s, x)) dx − ∫_{ξ̄}^{1} c(ρ(t, x)) − c(ρ(s, x)) dx |.
+Recall that c satisfies (1.5). We set ᾱ := ess sup_{u∈[0,1]} c′(u) and α̲ := ess inf_{u∈[0,1]} c′(u) > 0. Using the
+negative and positive parts of (ρ(t, ·) − ρ(s, ·)) and rearranging the terms, we get the following estimate:
+(3.6)    2 |ξ(t) − ξ(s)| ≤ ((ᾱ + α̲)/2) | ∫_{−1}^{ξ̲} ρ(t, x) − ρ(s, x) dx − ∫_{ξ̄}^{1} ρ(t, x) − ρ(s, x) dx |
+                          + ((ᾱ − α̲)/2) ∫_{−1}^{1} |ρ(t, x) − ρ(s, x)| dx =: I_1 + I_2.
+The first term I_1 of the right-hand side is controlled by the estimate of Lemma 3.2. The issue lies in controlling
+the second term I_2. This suggests that, in order to prove that ξ ∈ W^{1,∞}((0, T)), we need an estimate of
+the modulus of continuity of ρ as an element of C^0([0, T], L^1(R)). While the standard Oleinik regularizing
+effect can be used locally away from the turning curve (see [5]), in a vicinity of the turning curve the spatial
+variation of ρ may not be controlled; moreover, the (ir)regularity of the turning curve itself impacts the modulus
+of continuity of ρ, making it an open question how to control the time variations of ρ. We leave this issue for
+future research.
+However, we can treat a natural modification of problem (1.4) for which the method applied for the affine
+cost (3.3) extends to general costs. Let R : L^1((−∞, T)) → L^1((0, T)) be the operator defined by:
+(3.7)    R[ρ(·, x)](t) := δ ∫_{−∞}^{t} ρ(s, x) e^{−δ(t−s)} ds.
+To make this operator well defined, we extend ρ by ρ(t) = ρ_0 for any t ∈ (−∞, 0]. This model corresponds to
+a memory effect in individuals' perception of the density; R[ρ] is a subjective density perceived by an agent
+
+making the decision to move towards the most appropriate exit. Thus, we consider the problem:
+(1.4a)    ρ_t + [sign(x − ξ(t)) ρ v(ρ)]_x = 0,
+(1.4b')   ∫_{−1}^{ξ(t)} c(R[ρ(·, x)](t)) dx = ∫_{ξ(t)}^{1} c(R[ρ(·, x)](t)) dx,
+with c verifying (1.5), and with initial datum satisfying (1.3).
+Equation (1.4b') takes into account the average density over the recent past instead of the instantaneous
+density at time t. This models the bias, due to some inertia of human thinking, in the pedestrians' perception
+of the density in the corridor; the quantity R[ρ(·, x)] can be compared to other “subjective densities” used in
+the literature (cf. [10], [8, 7]). With the same calculations as in (3.6), we recover the term
+    I_2 = ∫_{−1}^{1} | R[ρ(·, x)](t) − R[ρ(·, x)](s) | dx,
+which is controlled by 2δ∥ρ∥_{L^∞}|t − s|, a bound for the modulus of continuity of R[ρ(·, x)]. For I_1 we can
+pass the absolute value inside the integral, so I_1 is also controlled by the modulus of continuity of R[ρ(·, x)].
+Notice that we do not need the property (1.16) for this reasoning. Consequently, we define:
+(3.9)    B_2 = { ρ ∈ B_{L^1}(0, T ∥ρ_0∥_{L^1}) s.t. 0 ≤ ρ ≤ 1 }.
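The operator R of (3.7) can be sketched discretely as an exponentially weighted moving average; the update below is the exact integration of R′ = δ(ρ − R) with ρ frozen on each time step. The sample density, the value of δ and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def memory_density(rho_t, t, delta, rho0):
    # Discrete sketch of (3.7): R[rho](t) = delta * int_{-inf}^t rho(s) e^{-delta (t-s)} ds,
    # with rho(s) = rho0 for s <= 0. Exact update for rho frozen on each step
    # (exponentially weighted moving average): R' = delta * (rho - R).
    R = np.empty_like(rho_t)
    R[0] = rho0                      # contribution of the constant tail on (-inf, 0]
    for n in range(1, len(t)):
        a = np.exp(-delta * (t[n] - t[n - 1]))
        R[n] = a * R[n - 1] + (1.0 - a) * rho_t[n - 1]
    return R

t = np.linspace(0.0, 2.0, 2001)
rho_t = (t > 1.0).astype(float)      # density at a fixed x jumping from 0 to 1 at t = 1
R = memory_density(rho_t, t, delta=4.0, rho0=0.0)
# R lags behind rho_t; its increments obey the bound 2*delta*||rho||_inf per unit time
```

Even across the jump of ρ, the increments of R stay within the modulus-of-continuity bound 2δ∥ρ∥_∞ |t − s| used above.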
+Then, I_δ : (B_2, ∥·∥_{L^1((0,T)×R)}) → (W^{1,∞}((0, T)), ∥·∥_∞), ρ ↦ ξ, where ξ is defined by (1.4b') with R
+given by (3.7), is well defined. The analogue of Proposition 3.1 (where we use I_δ instead of I_0 and B_2 instead
+of B_1, and we drop the assumption of affine cost) is easily justified. In particular, the proof of the third
+item of this analogue of Proposition 3.1 holds with these choices. Thus, without the restriction (3.3) on the
+cost, we have the following claim:
+Proposition 3.4. Let ρ_0 satisfy (1.3) and let c verify (1.5). Then problem (1.6a)-(1.6b)-(1.4b') admits at
+least one solution.
+3.3. The general cost case with relaxed equilibrium. We consider (1.6) with a modified equilibrium
+equation (1.4b). This time, we suppose that the collective behavior of pedestrians introduces some inertia
+into the dynamics of ξ. Fixing ε > 0, we consider as the simplest variant of such dynamics the ODE Cauchy
+problem
+(3.10a)    −ε ξ̇(t) = ∫_{ξ(t)}^{1} c(ρ(t, x)) dx − ∫_{−1}^{ξ(t)} c(ρ(t, x)) dx,
+(3.10b)    ∫_{ξ(0)}^{1} c(ρ_0(x)) dx − ∫_{−1}^{ξ(0)} c(ρ_0(x)) dx = 0,
+for the ρ-driven evolution of the turning curve ξ. Formally, the case ε = 0^+ corresponds to the standard
+Hughes's relation between the density and the turning curve; ε > 0 models a form of relaxation to the
+equilibrium given by this standard model. The primitive form of the Hughes' model, where the position of
+the turning curve is determined by an instantaneous Hamilton-Jacobi equation, should be modified to fit
+this dynamics of the turning curve; this modeling issue will be discussed elsewhere.
+Proposition 3.5. Let ρ ∈ L^1((0, T) × R) and let c verify the conditions (1.5). There exists a unique solution
+ξ to the Cauchy problem (3.10). Furthermore, ξ is Lipschitz and its Lipschitz constant is independent of ρ.
+Proof. Let us denote:
+    Ψ(t, a) := (1/ε) ( ∫_{a}^{1} c(ρ(t, x)) dx − ∫_{−1}^{a} c(ρ(t, x)) dx ).
+Notice that for any a, b ∈ [−1, 1], t ∈ R,
+(3.11)    |Ψ(t, a) − Ψ(t, b)| ≤ (1/ε) | ∫_{a}^{b} 2 c(ρ(t, x)) dx | ≤ (2∥c∥_∞/ε) |a − b|.
+We also have, for any ξ such that ∥ξ∥_∞ ≤ 1:
+    |Ψ(t, ξ(t))| ≤ (1/ε) | ∫_{−1}^{1} sign(x − ξ(t)) c(ρ(t, x)) dx | ≤ 2∥c∥_∞/ε.
+So Ψ is Lipschitz with respect to the variable a and uniformly bounded with respect to the variable t. We
+apply the Cauchy-Lipschitz theorem and recover that there exists a unique local solution to the Cauchy
+problem (3.10). Using (3.11), we recover that the solution is global on [0, T] and that ξ is Lipschitz; moreover,
+the Lipschitz constant of ξ does not depend on ρ.
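The Cauchy problem (3.10) can be approximated by an explicit Euler sketch, with the integrals in Ψ evaluated by Riemann sums. All names, grids and parameter values below are illustrative assumptions, not part of the paper.

```python
import numpy as np

def relax_turning_curve(rho, t, x, c, eps, xi0):
    # Explicit Euler for (3.10a): -eps * xi'(t) = int_{xi}^1 c(rho) dx - int_{-1}^{xi} c(rho) dx,
    # for a density sampled as rho[n, j] ~ rho(t_n, x_j) on a grid x covering [-1, 1].
    dx, dt = x[1] - x[0], t[1] - t[0]
    xi = np.empty(len(t))
    xi[0] = xi0
    for n in range(len(t) - 1):
        w = c(rho[n])
        psi = (np.sum(w[x > xi[n]]) - np.sum(w[x <= xi[n]])) * dx / eps
        xi[n + 1] = np.clip(xi[n] - dt * psi, -1.0, 1.0)   # xi' = -Psi(t, xi)
    return xi

t = np.linspace(0.0, 1.0, 1001)
x = np.linspace(-1.0, 1.0, 400)          # symmetric grid with no node at x = 0
rho = np.zeros((len(t), len(x)))         # empty corridor: the cost is constant in x
xi = relax_turning_curve(rho, t, x, lambda p: 1.0 + p, eps=0.5, xi0=0.0)
```

For a symmetric (here empty) corridor, ξ ≡ 0 is an equilibrium of (3.10a) and the discrete curve stays at the center.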
+Remark 3.6. From Proposition 3.5, it follows that the operator
+    Ĩ_ε : L^1((0, T) × R, [0, 1]) → W^{1,∞}((0, T))
+that maps any ρ to the unique solution ξ of (3.10) is well defined.
+Proposition 3.7. Let ρ_1, ρ_2 ∈ L^1((0, T) × R) and denote ξ_{1,2} := Ĩ_ε(ρ_{1,2}). Then,
+(3.12)    ∥ξ_1 − ξ_2∥_∞ ≤ (∥c′∥_∞/ε) exp( 2T ∥c∥_∞/ε ) ∥ρ_1 − ρ_2∥_{L^1((0,T)×(−1,1))}.
+Proof. We denote by ξ_0 the unique solution to (3.10b). Then, for any t ∈ [0, T]:
+    ξ_{1,2}(t) = ξ_0 − ∫_0^t Ψ_{1,2}(s, ξ_{1,2}(s)) ds.
+Then, writing ∨, ∧ for min, max, respectively, we make the following calculations:
+    ξ_2(t) − ξ_1(t) = ∫_0^t Ψ_1(s, ξ_1(s)) − Ψ_2(s, ξ_2(s)) ds
+    = (1/ε) ∫_0^t [ ∫_{−1}^{ξ_1(s)} c(ρ_1(s, x)) dx − ∫_{ξ_1(s)}^{1} c(ρ_1(s, x)) dx − ∫_{−1}^{ξ_2(s)} c(ρ_2(s, x)) dx + ∫_{ξ_2(s)}^{1} c(ρ_2(s, x)) dx ] ds
+    = (1/ε) ∫_0^t [ ∫_{−1}^{(ξ_1∨ξ_2)(s)} c(ρ_1(s, x)) − c(ρ_2(s, x)) dx ± ∫_{(ξ_1∨ξ_2)(s)}^{(ξ_1∧ξ_2)(s)} c(ρ_1(s, x)) + c(ρ_2(s, x)) dx
+        + ∫_{(ξ_1∧ξ_2)(s)}^{1} c(ρ_2(s, x)) − c(ρ_1(s, x)) dx ] ds.
+Consequently,
+    |ξ_1(t) − ξ_2(t)| ≤ (1/ε) ∫_0^t ∫_{(ξ_1∨ξ_2)(s)}^{(ξ_1∧ξ_2)(s)} c(ρ_1(s, x)) + c(ρ_2(s, x)) dx ds
+        + (1/ε) ∫_0^t ∫_{−1}^{1} |c(ρ_1(s, x)) − c(ρ_2(s, x))| dx ds =: J_1 + J_2.
+For the term J_2 we can use the Lagrange inequality, denoting ∥c′∥_∞ := sup_{p∈[0,1]} |c′(p)|. We get:
+    J_2 ≤ (∥c′∥_∞/ε) ∥ρ_1 − ρ_2∥_{L^1((0,T)×(−1,1))}.
+For the term J_1, notice that, thanks to the cost conditions (1.5), for any s ∈ [0, t],
+    2|ξ_1(s) − ξ_2(s)| ≤ ∫_{(ξ_1∨ξ_2)(s)}^{(ξ_1∧ξ_2)(s)} c(ρ_1(s, x)) + c(ρ_2(s, x)) dx ≤ 2∥c∥_∞ |ξ_1(s) − ξ_2(s)|.
+
+Consequently, for any s ∈ [0, T], there exists β(s) ∈ [2, 2∥c∥_∞] such that
+    ∫_{(ξ_1∨ξ_2)(s)}^{(ξ_1∧ξ_2)(s)} c(ρ_1(s, x)) + c(ρ_2(s, x)) dx = β(s) |ξ_1(s) − ξ_2(s)|.
+Then β ∈ L^∞((0, T)) ⊂ L^1((0, T)). We are now in a position to use Gronwall's inequality with integrable
+coefficients; that inequality still holds without the continuity of β if we use the Lebesgue differentiation
+theorem. We thus arrive at
+    |ξ_1(t) − ξ_2(t)| ≤ ∫_0^t (β(s)/ε) |ξ_1(s) − ξ_2(s)| ds + (∥c′∥_∞/ε) ∥ρ_1 − ρ_2∥_{L^1},
+which yields the subsequent estimates
+    |ξ_1(t) − ξ_2(t)| ≤ (∥c′∥_∞/ε) ∥ρ_1 − ρ_2∥_{L^1} exp( ∫_0^t β(s)/ε ds ),
+    ∥ξ_1 − ξ_2∥_∞ ≤ (∥c′∥_∞/ε) exp( 2T ∥c∥_∞/ε ) ∥ρ_1 − ρ_2∥_{L^1}.
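The Gronwall step can be sanity-checked at the discrete level: simulate the worst case (equality in the integral inequality) for a randomly sampled coefficient β ∈ [2, 2∥c∥_∞] and compare with the exponential bound. The constants below are hypothetical sample values, not taken from the paper.

```python
import numpy as np

# Discrete check of the Gronwall step: if u(t) <= A + int_0^t (beta(s)/eps) u(s) ds
# with 2 <= beta <= 2*cmax, then u(t) <= A * exp(2*cmax*t/eps).
rng = np.random.default_rng(0)
T, eps, cmax, A = 1.0, 0.5, 3.0, 0.25          # hypothetical sample constants
t = np.linspace(0.0, T, 10001)
dt = t[1] - t[0]
beta = rng.uniform(2.0, 2.0 * cmax, len(t))    # any measurable beta in [2, 2*cmax]

u = np.empty(len(t))
u[0] = A
for n in range(len(t) - 1):                    # worst case: equality in the inequality
    u[n + 1] = u[n] * (1.0 + dt * beta[n] / eps)
# since 1 + z <= e^z, u stays below the Gronwall bound A * exp(2*cmax*t/eps)
```

The discrete product ∏(1 + Δt β/ε) is dominated by exp(∫ β/ε), which is exactly the mechanism behind the final estimate (3.12).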
+Remark 3.8. One can check that, in the relaxed equilibrium setting, we never used any property of ρ apart
+from the universal bounds 0 ≤ ρ ≤ 1. Consequently, in this case we also use:
+(3.9)    B_2 = { ρ ∈ B_{L^1}(0, T ∥ρ_0∥_{L^1}) s.t. 0 ≤ ρ ≤ 1 }.
+Here is the final result in this relaxed equilibrium setting:
+Proposition 3.9. Let ρ_0 satisfy (1.3) and let c verify (1.5). Then problem (1.6a)-(1.6b)-(3.10) admits at
+least one solution.
+Proof. We only have to apply Corollary 1.9 with B_2 as the set B and check that, using Propositions 3.5 and
+3.7, all the assumptions on Ĩ_ε are satisfied.
+4. Hughes’ model with constrained evacuation at exit. In this section, we illustrate the robustness
+of our approach by modifying the Hughes model at the level of the boundary conditions for the density,
+allowing for the realistic feature of capacity drop (see [8, 7] and references therein). We consider the following
+dynamics for ρ, introduced in [8] on the basis of the theory of [11, 3]:
+(4.1a)    ρ_t + [sign(x − ξ(t)) f(ρ)]_x = 0,
+(4.1b)    f(ρ(t, 1)) ≤ g( ∫_{σ}^{1} w_1(x) ρ(t, x) dx ),
+(4.1c)    f(ρ(t, −1)) ≤ g( ∫_{−1}^{−σ} w_{−1}(x) ρ(t, x) dx ),
+(4.1d)    ρ(0, ·) = ρ_0(·).
+The equations (4.1b)-(4.1c) prescribe the behaviour at the exits situated at x = ±1; as in the previous sections,
+we set up the conservation law for ρ in the whole space, but the initial condition (1.3) is confined to the
+domain of interest (−1, 1). The flux f(ρ) of pedestrians going through the exits is limited by the respective
+constraints (we take a common nonlinearity g for the sake of conciseness, but it is straightforward to extend
+the setting by distinguishing g_1 and g_{−1}). This flux limiter g depends non-locally on ρ(t, ·) and on a weight w
+supported in a vicinity of length 1 − σ around the exits. This type of constraint models the well-known
+phenomenon of capacity drop which, in extreme situations, corresponds to a panic behaviour at the exits
+located at x = ±1, as discussed in [8] and [7]. This model, allowing us to consider constrained evacuation at
+exits, is phenomenologically more relevant than the model with open-end condition considered above (and it
+includes the previous model, for the trivial choice g ≡ max_{[0,1]} f, see Remark 4.3). As an example, this
+constrained evacuation model is able to reproduce the “Faster is Slower” effect at exits (see [7]).
+In the following, we will use the results of [7] and adapt them to our framework. We use the notations proposed
+in that paper:
+
+• Since f is concave and positive with f(0) = f(1) = 0, there exists ρ̄ ∈ [0, 1] such that f′(ρ)(ρ̄ − ρ) > 0
+for a.e. ρ ∈ [0, 1].
+• We fix σ ∈ (0, 1). This is the threshold of influence on the exit, meaning that the pedestrians located
+before x = σ have no influence on the exit congestion at x = 1.
+Let us take the strongest assumptions used in [8, 7]:
+(4.2)    w_1 ∈ W^{1,∞}((σ, 1], R_+) s.t. ∫_{σ}^{1} w_1 = 1,    w_{−1} ∈ W^{1,∞}([−1, −σ), R_+) s.t. ∫_{−1}^{−σ} w_{−1} = 1,
+(4.3)    g ∈ W^{1,∞}(R_+, (0, f(ρ̄)]) is non-increasing.
+We can now introduce the notion of solution we will use for ρ, combining the one in [11] and Definition 1.1:
+Definition 4.1. Let ξ ∈ W^{1,∞}((0, T), (−1, 1)). Let ρ_0 ∈ L^1(R, [0, 1]) be supported in [−1, 1]. Let f be a
+concave positive flux such that f(0) = 0 = f(1) and F(t, x, ρ) := sign(x − ξ(t)) f(ρ). Let g, ω_{−1} and ω_1
+satisfy (4.2)-(4.3).
+We say that ρ ∈ L^1((0, T) × R) is an admissible solution to (4.1) if:
+for all φ ∈ C^∞_c((0, T) × R),
+(4.4)    ∬_{(0,T)×R} ρ φ_t + F(t, x, ρ) φ_x dt dx = 0;
+moreover, setting
+(4.5)    Q_{−1}(t) := g( ∫_{−1}^{−σ} w_{−1}(x) ρ(t, x) dx ),    Q_1(t) := g( ∫_{σ}^{1} w_1(x) ρ(t, x) dx ),
+there holds:
+• For all positive φ ∈ C^∞_c({x > ξ(t)}), for all k ∈ R,
+(4.6)    −∬_{(0,T)×R} |ρ − k| φ_t + q(ρ, k) φ_x dt dx − 2 ∫_0^T ( 1 − Q_1(t)/f(ρ̄) ) f(k) φ(t, 1) dt − ∫_R |ρ_0 − k| φ(0, x) dx ≤ 0.
+• For all positive φ ∈ C^∞_c({x < ξ(t)}), for all k ∈ R,
+(4.7)    −∬_{(0,T)×R} |ρ − k| φ_t + q(ρ, k) φ_x dt dx − 2 ∫_0^T ( 1 − Q_{−1}(t)/f(ρ̄) ) (−f(k)) φ(t, −1) dt − ∫_R |ρ_0 − k| φ(0, x) dx ≤ 0.
+• For all positive φ ∈ C^∞ supported on [a, b] with a < −1 and 1 < b, we have:
+(4.8a)    ∫_0^T ∫_a^{−1} ρ φ_t + F(t, x, ρ) φ_x dx dt ≤ ∫_0^T Q_{−1}(t) φ(t, −1) dt,
+(4.8b)    ∫_0^T ∫_1^b ρ φ_t + F(t, x, ρ) φ_x dx dt ≤ ∫_0^T Q_1(t) φ(t, 1) dt.
+Remark 4.2. As detailed in [3], the equations (4.8) combined with the weak solution property (4.4) imply that,
+for a.e. t ≥ 0, f(γ^1_{L,R} ρ(t)) ≤ Q_1(t) and −f(γ^{−1}_{L,R} ρ(t)) ≥ −Q_{−1}(t). This corresponds to the expected
+limited flux condition.
+Remark 4.3. One can notice that if, for all t ≥ 0, g(t) = f(ρ̄), then the flux is not limited at the exits and
+1 − Q_1(t)/f(ρ̄) = 1 − Q_{−1}(t)/f(ρ̄) = 0. Then, this definition is exactly Definition 1.1.
+We have the following results:
+
+Proposition 4.4. Let ρ_0 verify (1.3) and let ξ ∈ W^{1,∞}((0, T), (−1, 1)). There exists a solution to (4.1) in
+the sense of Definition 4.1.
+The proof of Proposition 4.4 is postponed to the Appendix. It is obtained via a convergent finite volume
+scheme; the details of the scheme and the proof of convergence can be found there.
+Using the results from [11], [7], [8] and a partitioning argument, we prove a corollary of Theorem 1.8:
+Corollary 4.5. Let ρ_0 verify (1.3) and let ξ ∈ W^{1,∞}((0, T), (−1, 1)). There exists at most one solution ρ of
+(4.1) in the sense of Definition 4.1. Using Proposition 4.4, the solver operator
+    S_g : (W^{1,∞}((0, T), (−1, 1)), ∥·∥_∞) → (L^1((0, T) × (−1, 1)), ∥·∥_{L^1}),
+that maps any ξ to the unique solution ρ of (4.1), is well defined and continuous.
+Proof of Corollary 4.5. We use the classical embedding of W^{1,∞}([0, T], (−1, 1)) into C^0([0, T], (−1, 1)):
+there exists a closed segment K of (−1, 1) such that ξ ∈ C^0([0, T], K). We consider (φ_i)_{i∈{−1,0,1}} a partition
+of unity of an open set containing [−1, 1] such that all the supports are segments, 1 ∈ supp(φ_1), −1 ∈ supp(φ_{−1}),
+K ⊂ supp(φ_0) ⊂ (−1, 1), and [supp(φ_{−1}) ∪ supp(φ_1)] ∩ K = ∅.
+Let ρ, ρ̂ be two solutions in the sense of Definition 4.1. We denote by Q̂_{1,−1} the constraints associated with ρ̂.
+Let Ψ ∈ C^∞_c((0, T) × R). We use the classical Kruzhkov doubling of variables (cf. [14]) in the open subdomains
+of (0, T) × R situated between x = −∞ and x = −1, x = −1 and x = ξ(t), x = ξ(t) and x = 1, and finally
+between x = 1 and x = +∞. Then, by a limiting procedure analogous to the one employed in the proof
+of Theorem 2.1, we obtain the Kato inequality carrying singular terms concentrated on the three curves
+{x = ξ(t)}, {x = 1} and {x = −1}:
+    −∬_{(0,T)×(−1,1)} |ρ − ρ̂| φ_t + q(ρ, ρ̂) φ_x
+(4.9a)    ≤ ∫_0^T Ψ(t, ξ(t)) (φ_0 + φ_{−1} + φ_1)(t, ξ(t)) [ q^0_R(γ_R ρ, γ_R ρ̂) − q^0_L(γ_L ρ, γ_L ρ̂) ]
+(4.9b)      + ∫_0^T Ψ(t, 1) φ_1(t, 1) [ q^1(γ_R ρ, γ_R ρ̂) − q^1(γ_L ρ, γ_L ρ̂) ]
+(4.9c)      + ∫_0^T Ψ(t, −1) φ_{−1}(t, −1) [ q^{−1}(γ_R ρ, γ_R ρ̂) − q^{−1}(γ_L ρ, γ_L ρ̂) ],
+where the left and right traces are taken along their respective curves, and
+    q^0_{L,R}(ρ, ρ̂) := sign(ρ − ρ̂) ( f_{L,R}(ρ) − f_{L,R}(ρ̂) − ξ̇(t)(ρ − ρ̂) ),
+    q^1(ρ, ρ̂) := sign(ρ − ρ̂) [ f_R(ρ) − f_R(ρ̂) ],
+    q^{−1}(ρ, ρ̂) := sign(ρ − ρ̂) [ f_L(ρ) − f_L(ρ̂) ].
+Referring to the proof of Theorem 2.1, the integral (4.9a) is zero. Using the same argument as in the proof of
+Proposition 2.10 in [3], we get:
+    (4.9b) ≤ 2 ∫_0^T Ψ(t, 1) | Q_1(t) − Q̂_1(t) | dt,    (4.9c) ≤ 2 ∫_0^T Ψ(t, −1) | Q_{−1}(t) − Q̂_{−1}(t) | dt.
+As in the proof of Theorem 2.1, we integrate (4.9) along a trapezoid T^{0,t}_{a,b}. Then we use the definition of
+Q_{±1}, Q̂_{±1}, with L_g the Lipschitz constant of g, to get the following inequality:
+    ∥ρ(t, ·) − ρ̂(t, ·)∥_{L^1((a,b))} ≤ ∥ρ_0 − ρ̂_0∥_{L^1((a−L_f t, b+L_f t))} + 2 ∫_0^t ∫_{−1}^{1} L_g ( 1_{(−1,−σ)} ω_{−1} + 1_{(σ,1)} ω_1 ) |ρ − ρ̂| dx ds.
+
+Eventually, using Hölder's inequality and Gronwall's lemma, we get:
+(4.10)    ∥ρ(t, ·) − ρ̂(t, ·)∥_{L^1((a,b))} ≤ ∥ρ_0 − ρ̂_0∥_{L^1((a−L_f t, b+L_f t))} e^{Ct},
+where C := 2L_g ∥1_{(−1,−σ)} ω_{−1} + 1_{(σ,1)} ω_1∥_∞. Consequently, there is at most one solution in the sense of
+Definition 4.1 associated with a fixed turning curve ξ and an initial datum ρ_0.
+In order to recover the continuity of the operator S_g, we proceed in the same way as in the proof of Proposition
+1.8. We first cover any compact set contained in {ξ(t) < x < 1} by trapezoids. Without loss of generality,
+we can suppose those trapezoids are at distance at least ε from both interfaces {x = ξ(t)} and {x = 1}.
+Consequently, on any trapezoid, for all n ≥ n_0, ρ_n is a Kruzhkov entropy solution. We recover compactness
+thanks to the averaging compactness lemma. This reasoning can be reproduced in the three other parts
+of the domain: {x < −1}, {−1 < x < ξ(t)} and {x > 1}. Then, we can pass to the limit via dominated
+convergence in equation (4.4) and in all the inequalities (4.6)-(4.7)-(4.8). We conclude the proof with the
+same classical arguments as in the proof of Proposition 1.8. That ends the proof of Corollary 4.5.
+We are ready to state the main result of this section, which is an analogue of Theorem 1.9.
+Theorem 4.6. Let ρ_0 verify (1.3) and assume that f verifies (1.13). Let g (resp. ω_{1,−1}) satisfy (4.3) (resp.
+(4.2)). Let B be a convex, closed, bounded subset of L^1((0, T) × R) and let
+    I : (B, ∥·∥_{L^1((0,T)×R)}) → (C^0([0, T], R), ∥·∥_∞)
+be a continuous operator such that ∀ρ ∈ B, ∀t ∈ [0, T], I[ρ](t) ∈ (−1, 1). If there exists r > 0 such that
+(1.14a)-(1.14b) hold, then there exists a solution (ρ, ξ) to the problem (4.1)-(1.6b)-(1.6c), where ρ is a solution
+in the sense of Definition 4.1. In particular, existence is verified for I = I_0 (for affine cost) and for I = I_δ
+or Ĩ_ε (for general cost verifying (1.5)).
+Appendix A. Convergence of the finite volume scheme in the constrained case. In order to prove
+existence of a solution to (4.1) in the sense of Definition 4.1, we construct a convergent finite volume scheme
+adapted around the fixed turning curve ξ. At the exits we use an operator splitting method with a scheme
+for the constraints Q_1 and Q_{−1} as in [7].
+We now present the scheme used in this setting. Let T, J ∈ N be such that:
+(CFL)    2 ( ∥f′∥_∞ + ∥ξ̇∥_∞ ) J/T ≤ 1.
+We construct the following scheme:
+(A.1a)    Δt = 1/T,    t^n := nΔt,
+(A.1b)    Δx = 1/J,    x_j := jΔx,
+(A.1c)    s^n := (1/Δt) ∫_{t^n}^{t^{n+1}} ξ̇(s) ds,    s_Δ(t) := Σ_n 1_{[t^n, t^{n+1})}(t) s^n,
+(A.1d)    ξ_Δ(t) := ξ(0) + ∫_0^t s_Δ(s) ds,    ξ^n := ξ_Δ(t^n).
+The discretization (A.1c)-(A.1d) of the interface ξ is detailed in [22], Section 3.1, where it is required to
+construct the adapted mesh.
+For any n, we denote by j_n the unique element of ⟦−J, J⟧ such that ξ^n ∈ [x_{j_n}, x_{j_n+1}). We construct
+the following mesh:
+    χ^n_j := x_j if j ≤ j_n − 1;   χ^n_j := y^n if j = j_n;   χ^n_j := x_j if j ≥ j_n + 1;
+(A.1e)    P^n_{j+1/2} := (χ^n_j, χ^n_{j+1}) × (t^n, t^{n+1}) if j ≤ j_n − 2;
+          the trapezoid χ^n_{j_n−1} χ^{n+1}_{j_n−1} χ^{n+1}_{j_n+1} χ^n_{j_n} if j = j_n − 1;
+          the trapezoid χ^n_{j_n} χ^{n+1}_{j_n+1} χ^{n+1}_{j_n+2} χ^n_{j_n+2} if j = j_n;
+          (χ^n_{j+1}, χ^n_{j+2}) × (t^n, t^{n+1}) if j ≥ j_n + 1.
+
+AN EXISTENCE RESULT FOR HUGHES’ MODEL
+19
+Notice that, thanks to the (CFL) condition, x_{j_n-1} < ξ^{n+1} < x_{j_n+2}, so the trapezoids defined above are
+never reduced to a triangle. We denote by \underline{P}^n_{j+1/2} (resp. \overline{P}^n_{j+1/2}) the bottom (resp. top) segment of the
+trapezoid P^n_{j+1/2}. However, now that the mesh is modified, we have two different partitions of the line
+t = t^{n+1}: (\underline{P}^{n+1}_{j+1/2})_{j∈Z} and (\overline{P}^n_{j+1/2})_{j∈Z}. We define (\bar\rho^{n+1}_{i+1/2})_{i∈Z} corresponding to the values of ρ^{n+1} on (\overline{P}^n_{i+1/2})_{i∈Z} and
+(ρ^{n+1}_{j+1/2})_{j∈Z} the projection of these values onto (\underline{P}^{n+1}_{j+1/2})_{j∈Z}.
+\bar\rho^{n+1}_{j+1/2} = \frac{\rho^n_{j+1/2}\,\bigl|\underline{P}^n_{j+1/2}\bigr| - \Delta t\,\bigl(f^n_{j+1} - f^n_j\bigr)}{\bigl|\overline{P}^n_{j+1/2}\bigr|}   (A.1f)
+\rho^{n+1}_{j+1/2} := \frac{1}{\bigl|\underline{P}^{n+1}_{j+1/2}\bigr|} \sum_{i\in\mathbb{Z}} \bigl|\underline{P}^{n+1}_{j+1/2} \cap \overline{P}^n_{i+1/2}\bigr|\, \bar\rho^{n+1}_{i+1/2}   (A.1g)
+\rho_\Delta(t, x) := \sum_{n=0}^{N} \sum_{\substack{j \in \mathbb{Z} \\ j \ne j_n \pm 1}} \rho^n_{j+1/2}\, \mathbf{1}_{P^n_{j+1/2}}(t, x)   (A.1h)
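+The update (A.1f) is a finite volume balance on each trapezoid, and (A.1g) is a conservative projection between the two partitions of the line t = t^{n+1}. The projection step can be sketched as follows; the partitions `a`, `b` and the cell values are illustrative, not taken from the scheme.

```python
import numpy as np

def project(a, rho_src, b):
    """Conservative projection of piecewise-constant data from partition a to
    partition b: each new cell average is the overlap-length-weighted mean of
    the old values, as in (A.1g)."""
    rho_dst = np.empty(len(b) - 1)
    for j in range(len(b) - 1):
        cell = 0.0
        for i in range(len(a) - 1):
            # length of the intersection of old cell i with new cell j
            overlap = max(0.0, min(a[i + 1], b[j + 1]) - max(a[i], b[j]))
            cell += overlap * rho_src[i]
        rho_dst[j] = cell / (b[j + 1] - b[j])
    return rho_dst

a = np.array([0.0, 0.4, 1.0])          # top segments of the old cells
b = np.array([0.0, 0.5, 1.0])          # bottom segments of the new cells
rho_src = np.array([0.8, 0.2])
rho_dst = project(a, rho_src, b)

# the total mass is preserved by the projection
assert np.isclose(np.dot(np.diff(a), rho_src), np.dot(np.diff(b), rho_dst))
```

This weighting is exactly what makes the remeshing step conservative, so the discrete solution keeps satisfying an integral weak formulation.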
+We now want to define the numerical fluxes (f^n_j)_{j\in\mathbb{Z}} corresponding to the left and right edges of the
+trapezoids. It is worth noticing that we skipped f^n_{j_n+1} when we constructed the mesh. We first define the
+non-local constraint approximation:
+\rho^n_{\Delta x}(\cdot) = \sum_{j\in\mathbb{Z}} \rho^n_{j+1/2}\, \mathbf{1}_{[\chi^n_j, \chi^n_{j+1})}(\cdot)   (A.1i)
+q^n_1 := g_1\Bigl( \int_\sigma^1 \rho^n_{\Delta x}(x)\, \omega_1(x)\, dx \Bigr)   (A.1j)
+q^n_{-1} := g_{-1}\Bigl( \int_{-1}^{-\sigma} \rho^n_{\Delta x}(x)\, \omega_{-1}(x)\, dx \Bigr)   (A.1k)
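+The constraint (A.1j) can be approximated by a quadrature of the piecewise-constant density against the weight, as sketched below; σ, g_1 and ω_1 are hypothetical placeholders for the data of the model, with ω_1 normalized so that its integral over [σ, 1] equals 1.

```python
import numpy as np

sigma = 0.5
g1 = lambda s: 1.0 / (1.0 + s)                 # hypothetical non-increasing efficiency
omega1 = lambda x: 2.0 / (1.0 - sigma) ** 2 * (x - sigma)   # linear weight on [sigma, 1]

J = 100
dx = 1.0 / J
centers = dx * (np.arange(-J, J) + 0.5)        # cell centers on (-1, 1)
rho = np.full(2 * J, 0.6)                      # constant discrete density rho^n_{j+1/2}

mask = centers >= sigma                        # cells lying inside [sigma, 1]
integral = np.sum(rho[mask] * omega1(centers[mask]) * dx)   # midpoint quadrature of (A.1j)
q1 = g1(integral)

# the weight integrates to 1, so the average of a constant density is recovered
assert np.isclose(integral, 0.6)
```

The midpoint rule is exact here because ω_1 is affine on each cell; for a general weight it is simply a first-order quadrature consistent with the scheme.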
+F(\rho^n_{j-1/2}, \rho^n_{j+1/2}) = \begin{cases}
+\min\bigl( \mathrm{God}_f(\rho^n_{j-1/2}, \rho^n_{j+1/2}),\, q^n_1 \bigr) & \text{if } j - 1 = J, \\
+\max\bigl( \mathrm{God}_{-f}(\rho^n_{j-1/2}, \rho^n_{j+1/2}),\, -q^n_{-1} \bigr) & \text{if } j = -J, \\
+F^n_{\mathrm{int}}(\rho^n_{j-1/2}, \rho^n_{j+1/2}) & \text{if } j = j_n, \\
+\mathrm{God}_f(\rho^n_{j-1/2}, \rho^n_{j+1/2}) & \text{if } j > j_n \text{ and } j - 1 \ne J, \\
+\mathrm{God}_{-f}(\rho^n_{j-1/2}, \rho^n_{j+1/2}) & \text{if } j < j_n \text{ and } j \ne -J.
+\end{cases}   (A.1l)
+Finally, we define F^n_{\mathrm{int}} as in [6] (see details in Subsections 2.5, 3.3 and 5.1):
+f^n_{L,R}(\rho) := \pm f(\rho) - s^n \rho,
+\forall (\rho_L, \rho_R) \in [0, 1]^2,\ \exists\, k \in [0, 1] \text{ s.t. } \mathrm{God}_{f^n_L}(\rho_L, k) = \mathrm{God}_{f^n_R}(k, \rho_R),
+F^n_{\mathrm{int}}(\rho^n_{j-1/2}, \rho^n_{j+1/2}) := \mathrm{God}_{f^n_L}(\rho^n_{j-1/2}, k) = \mathrm{God}_{f^n_R}(k, \rho^n_{j+1/2}).   (A.1m)
+Numerical simulations for this scheme can be found in [6, Sect. 5.1] for the case of the open-end condition
+at the exits.
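+The fluxes in (A.1l) combine the Godunov flux with the capacity constraint at the exits; this can be sketched as follows. The flux f(ρ) = ρ(1 − ρ) is an illustrative choice of concave flux with f(0) = f(1) = 0 (critical point ρ_c = 1/2), and the closed form used for God_f is the classical Godunov flux for a concave flux.

```python
import numpy as np

def f(rho):
    # illustrative concave flux, f(0) = f(1) = 0, maximal at rho_c = 1/2
    return rho * (1.0 - rho)

def godunov(a, b, rho_c=0.5):
    # classical closed form of the Godunov flux God_f(a, b) for a concave flux
    return min(f(min(a, rho_c)), f(max(b, rho_c)))

def constrained_flux(a, b, q):
    # right-exit flux min(God_f(a, b), q^n_1), first line of (A.1l)
    return min(godunov(a, b), q)

# a transonic rarefaction passes at the maximal flux f(1/2) = 1/4 ...
assert np.isclose(godunov(0.7, 0.3), 0.25)
# ... unless the exit capacity q < 1/4 caps it, modelling the capacity drop
assert np.isclose(constrained_flux(0.7, 0.3, 0.1), 0.1)
```

The left-exit flux in (A.1l) is the mirror image: a max against −q^n_{−1} for the flux −f oriented towards x = −1.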
+We are now in a position to start the proof of convergence, which merely assembles, with the help of the
+partition-of-unity technique of [22, 6], the arguments from [6] (for the inner interface situated at x = ξ(t)) and
+[7] (for the constraints set at x = ±1).
+Proof of Proposition 4.4. The proof follows the general idea of [22, Sect. 4], see also [6]. Since the interfaces
+{x = −1}, {x = ξ(t)} and {x = 1} are non-intersecting, we isolate them in the supports of a partition of
+unity φ−1, φ0 and φ1. We fix a test function φ. Taking (the discretization of) the test function φ0φ we can
+
+20
+B. ANDREIANOV, T. GIRARD
+use the specific result for the Hughes’ model treated in [6, Sect. 5.1] to recover the approximate entropy
+inequalities satisfied by the discrete solution, with the test function φ0φ. For test functions φ−1φ and φ1φ,
+we use in the same way the result of [7, Prop. 3.1]. Summing up the contributions of the three parts of the
+partition of unity, we obtain an approximate entropy inequality for the discrete solution, with an arbitrary test
+function φ. In addition, the integral weak formulation for the approximate solution follows from the scheme’s
+conservativity. We use the same compactness argument as in [22, Sect. 3.4]. We can pass to the limit in
+the approximate weak formulation and in the approximate entropy inequalities, for the chosen converging
+subsequence and an arbitrary test function. This allows us to characterize the limit as an entropy solution, in
+the sense of Definition 4.1, of the problem at hand. Finally, thanks to the uniqueness proven in Theorem 4.5,
+the whole sequence of discrete solutions converges to the unique solution in the sense of Definition 4.1.
+Acknowledgments. This paper has been supported by the RUDN University Strategic Academic Leader-
+ship Program.
+REFERENCES
+[1] D. Amadori and M. Di Francesco, The one-dimensional Hughes model for pedestrian flow: Riemann-type solutions,
+Acta Math. Sci. Ser. B Engl. Ed., 32 (2012), pp. 259–280.
+[2] D. Amadori, P. Goatin, and M. D. Rosini, Existence results for Hughes’ model for pedestrian flows, J. Math. Anal.
+Appl., 420 (2014), pp. 387–406.
+[3] B. Andreianov, P. Goatin, and N. Seguin, Finite volume schemes for locally constrained conservation laws, Numer.
+Math. (Heidelb.), 115 (2010), pp. 609–645.
+[4] B. Andreianov, K. H. Karlsen, and N. H. Risebro, A theory of L1-dissipative solvers for scalar conservation laws
+with discontinuous flux, Arch. Ration. Mech. Anal., 201 (2011), pp. 27–86.
+[5] B. Andreianov, M. D. Rosini, and G. Stivaletta, On existence, stability and many-particle approximation of
+solutions of 1D Hughes model with linear costs. working paper or preprint, July 2021.
+[6] B. Andreianov and A. Sylla, Finite volume approximation and well-posedness of conservation laws with moving
+interfaces under abstract coupling conditions. submitted, 2022.
+[7] B. P. Andreianov, C. Donadello, U. Razafison, and M. D. Rosini, Qualitative behaviour and numerical approxi-
+mation of solutions to conservation laws with non-local point constraints on the flux and modeling of crowd dynamics
+at the bottlenecks, Mathematical Modelling and Numerical Analysis, 50 (2015), pp. 1269–1287.
+[8] B. P. Andreianov, C. Donadello, and M. D. Rosini, Crowd dynamics and conservation laws with nonlocal constraints
+and capacity drop, Mathematical Models and Methods in Applied Sciences, 24 (2014), pp. 2685–2722.
+[9] C. Cancès and T. Gallouët, On the time continuity of entropy solutions, J. Evol. Equ., 11 (2011), pp. 43–55.
+[10] J. A. Carrillo, S. Martin, and M.-T. Wolfram, An improved version of the Hughes model for pedestrian flow,
+Mathematical Models and Methods in Applied Sciences, 26 (2016), pp. 671–697.
+[11] R. M. Colombo and P. Goatin, A well posed conservation law with a variable unilateral constraint, J. Differ. Equ.,
+234 (2007), pp. 654–675.
+[12] M. Di Francesco, P. A. Markowich, J.-F. Pietschmann, and M.-T. Wolfram, On the Hughes’ model for pedestrian
+flow: The one-dimensional case, J. Differ. Equ., 250 (2011), pp. 1334–1362.
+[13] N. El-Khatib, P. Goatin, and M. D. Rosini, On entropy weak solutions of Hughes model for pedestrian motion,
+Zeitschrift für angewandte Mathematik und Physik, 64 (2013), pp. 223–251.
+[14] L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, American Mathematical Society, Provi-
+dence, RI, May 1998.
+[15] P. Goatin and M. Mimault, The wave-front tracking algorithm for Hughes’ model of pedestrian motion, SIAM J. Sci.
+Comput., 35 (2013), pp. B606–B622.
+[16] D. A. Gomes and R. M. Velho, On the Hughes model and numerical aspects, (2016).
+[17] R. L. Hughes, A continuum theory for the flow of pedestrians, Transportation Research Part B-methodological, 36
+(2002), pp. 507–535.
+[18] M. J. Lighthill and G. B. Whitham, On kinematic waves. II. A theory of traffic flow on long crowded roads, Proceedings
+of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 229 (1955), pp. 317–345.
+[19] B. Perthame, Kinetic formulation of conservation laws, Oxford Lecture Series in Mathematics and its Applications,
+Clarendon Press, Oxford, England, Jan. 2003.
+[20] P. I. Richards, Shock waves on the highway, Operations research, 4 (1956), pp. 42–51.
+[21] A. Sylla, Influence of a slow moving vehicle on traffic: Well-posedness and approximation for a mildly nonlocal model,
+Networks and Heterogeneous Media, 16 (2021).
+[22] A. Sylla, A LWR model with constraints at moving interfaces, ESAIM: Mathematical Modelling and Numerical Analysis,
+56 (2022).
+[23] M. Twarogowska, P. Goatin, and R. Duvigneau, Numerical study of macroscopic pedestrian flow models, (2013).
+[24] A. Vasseur, Strong traces for solutions of multidimensional scalar conservation laws, Arch. Ration. Mech. Anal., 160
+(2001), pp. 181–193.
+[25] E. Zeidler, Applied functional analysis, Applied mathematical sciences, Springer, New York, NY, 1995 ed., Dec. 2012.
+
diff --git a/uNE5T4oBgHgl3EQfLQ6L/content/tmp_files/load_file.txt b/uNE5T4oBgHgl3EQfLQ6L/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..298634de4513c4a376f0f143db6d979087c1ff9b
--- /dev/null
+++ b/uNE5T4oBgHgl3EQfLQ6L/content/tmp_files/load_file.txt
@@ -0,0 +1,1217 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf,len=1216
+page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='05472v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='AP] 13 Jan 2023 EXISTENCE OF SOLUTIONS TO A CLASS OF ONE-DIMENSIONAL MODELS FOR PEDESTRIAN EVACUATIONS∗ BORIS ANDREIANOV† AND THEO GIRARD ‡ Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' In the framework inspired by R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Hughes model (Transp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' B, 2002) for pedestrian evacuation in a corridor, we establish existence of a solution by a topological fixed point argument.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' This argument applies to a class of models where the dynamics of the pedestrian density ρ (governed by a discontinuous-flux Lighthill,Whitham and Richards model ρt + (sign(x − ξ(t))ρv(ρ))x = 0 ) is coupled via an abstract operator to the computation of a Lipschitz continuous “turning curve” ξ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' We illustrate this construction by several examples, including the standard Hughes’ model with affine cost, and either with open-end conditions or with conditions corresponding to panic behaviour with capacity drop at exits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Other examples put forward versions of the Hughes model with inertial dynamics of the turning curve and general costs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Key words.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' crowd dynamics, pedestrian evacuation, Hughes’ model, capacity drop, existence, Schauder fixed-point, admissible solution, discontinuous-flux conservation law, memory, relaxation MSC codes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' 35L65, 47H10 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Introduction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' The Hughes model and its variants.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' The Lighthill,Whitham and Richards (LWR) model for traffic introduced in [18] and in [20] consists in a conservation law for the vehicule density ρ with a concave positive flux ρv(ρ): (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='1) �ρt + [ρv(ρ)]x = 0 ρ(t = 0, x) = ρ0(x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Here, we can suppose that the density ρ takes its values in [0, 1] and v stands for the speed of the traffic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' This model can be seen as the mass conservation equation where velocity v depends only on the traffic density ρ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' One frequently chooses v(ρ) = 1 − ρ up to a multiplicative constant representing the maximal velocity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' This describes a transport of the initial density of agents ρ0 at t = 0 towards x = +∞ where the speed is decreasing when the density of agents is increasing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Then, in [17], Hughes proposed a model of pedestrian evacuation as a system of two equations on ρ and φ which is known as Hughes’ model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' In the multi-dimensional model, ρ is the density of pedestrians with respect to time t and space x.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' The dynamics of ρ is governed by LWR conservation laws with direction field oriented towards the exits of a bounded domain Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' In order to prescribe the direction towards the exit preferred by a pedestrian at location x at a time t, Hughes defines φ(t, x), the “potential field” satisfying an eikonal equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' The potential φ is zero on the exits located on ∂Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' A pedestrian would then choose to “descend the gradient” of this potential in order to leave the domain Ω by these exits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Theory of the Hughes’ model is yet incomplete, even in one space dimension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' In the 1D case, the model of [17] takes the form: (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2a) (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2b) (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2c) (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2d) \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 ρt + [sign(−∂xφ)ρv(ρ)]x = 0 ρ(t, x = ±1) = 0 |∂xφ| = 1 v(ρ) φ(t, x = ±1) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' This problem (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2) is set up in a corridor with two exits;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' upon renormalization, we assumed that Ω = (−1, 1) and that the exits are located at x = ±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' At t = 0 the pedestrians are distributed with a given density ρ0 defined in [−1, 1] and at t > 0, the pedestrians want to leave the corridor by either one of the exits (as if a ∗Submitted to the editors DATE.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' †Institut Denis Poisson CNRS UMR 7013, Université de Tours, Université d’Orléans, Parc Grandmont, 37200 Tours, France and Peoples’ Friendship University of Russia (RUDN University) 6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation (Boris.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='Andreianov@lmpt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='univ-tours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='fr, https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='idpoisson.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='fr/andreianov/).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' ‡Institut Denis Poisson, Université de Tours, Parc Grandmont, 37200 Tours, France (theo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='girard@lmpt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='univ-tours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='fr).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' 1 2 B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' ANDREIANOV, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' GIRARD fire alarm starts ringing at t = 0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' The pedestrians move forward (with the positive flux ρ �→ +ρv(ρ)) or backward (with ρ �→ −ρv(ρ) ) depending of the sign of ∂xφ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' This results in (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2a) being a discontinuous flux LWR conservation law.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' The sign of ∂xφ is prescribed by the eikonal equation (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2c) where c(ρ) = 1 v(ρ) is a cost function that is high where the crowd is slow.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Consequently, the pedestrians tend to avoid those “congested” regions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' The Dirichlet boundary condition (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2b) on the density ρ is understood in the Bardos-LeRoux-Nédélec sense standard for scalar conservation laws;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' it is shown in [5, Sect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' 3] that upon extending ρ0 by the value zero on R\\[−1, 1], one can replace the initial-boundary value problem (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2a)-(1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2b) with ρ0 : (−1, 1) −→ [0, 1] by the pure intitial-value problem for (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2a) with the extended datum ρ0 : R −→ [0, 1] (the extension means that ρ0, now defined on R, is supported in [−1, 1]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' We adopt this viewpoint and require, throughout the paper, (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='3) ρ0 ∈ L∞(R;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' [0, 1]), ρ(x) = 0 for x /∈ [−1, 1];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' note that being compactly supported, ρ0 ∈ L1(R).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Assumption (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='3) for the conservation law (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2a) set up in the whole space can be seen as “open-end condition” at exits;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' we refer to Section 4 for models with more involved exit behavior.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' In [13], the 1D Hughes’ model (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2) has been reformulated in terms of a “turning curve” ξ(t) instead of the potential φ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Following the turning curve approach, our prototype model in the sequel will be: (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='4a) (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='4b) \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 ρt + [sign(x − ξ(t))ρv(ρ)]x = 0 � ξ(t) −1 c(ρ(t, x)) dx = � 1 ξ(t) c(ρ(t, x)) dx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' with ρ defined for t ∈ [0, T ], T > 0, and x ∈ R and with initial datum of the form (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Here c denotes a generic cost function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' It is proven in [13] that we can equivalently consider either the Hughes’ model potential equation (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2c)-(1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2d) or the reformulated problem (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='4b) with the cost function c(ρ) = 1 v(ρ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' However, here, we will consider a cost verifying the following conditions: (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='5) \uf8f1 \uf8f2 \uf8f3 c ∈ W 1,∞([0, 1]), ∀ρ ∈ [0, 1], c(ρ) ≥ 1, c is increasing on [0, 1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' In (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='4), ρ is considered to be an entropy solution to (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='4a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Such notion of solution with a particular attention to the admissibility of the jump of ρ across the turning curve x = ξ(t) was proposed in [13] (we will slightly simplify this solution notion).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' On the other hand, ξ is a pointwise defined solution to (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='4b) whose existence in L∞ and uniqueness follows from the intermediate values theorem under the conditions (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='5).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' In this paper, we will consider a class of “turning curve” model’s generalisations, keeping in mind the fact that, even in the setting (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='4), little is known about the well-posedness of the Hughes’ model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' For notation’s sake, we consider f a generic concave positive flux such that f(0) = f(1) = 0 (one can assume f(ρ) = ρv(ρ) to recover the LWR model): (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='6a) (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='6b) (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='6c) \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 ρt + [sign(x − ξ(t))f(ρ)]x = 0 ρ(0, x) = ρ0(x) ξ = I(ρ) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Here I is an abstract operator mapping the density ρ to a turning curve ξ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' The problem (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='4) is a particular case of (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='6) where I is the solver of the integral equation (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='4b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Stating (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='6b), we mean that ρ0 fulfills (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='3) which corresponds to open-end evacuation at exits, as stated above.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Let us briefly discuss known results on the specific problem (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='4) and its variants.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' In [13] uniqueness is proven for a definition of entropy solutions taking the discontinuity into account but considering ξ as being given beforehand (we will revisit this result in Section 2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' In [2] global existence for Hughes’ model (with c(ρ) = 1 v(ρ)) is proven if one assumes that the density at the turning curve is zero for all times.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' In [5], a AN EXISTENCE RESULT FOR HUGHES’ MODEL 3 uniqueness result in the same setting as this paper assuming moreover the BV regularity of the solutions is provided.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' And in [23], [15] and [16] one can find numerical studies of the model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Proof of existence and unicity for the regularized problem can be found in [12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' The Hughes’ model is also revisited with different turning curve equation in [10] with numerical simulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' In this paper, the authors introduce a regularization by convolution of the density named the subjective density.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' We also use the same type of idea when applying our main result in the case of a general cost function c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
The only general (with respect to the choice of the initial data) existence result is contained in [5], where solutions with BVloc regularity away from the turning curve were constructed via a well-chosen many-particle approximation. The result of [5] for problem (1.4) is limited to the case of an affine cost c(ρ) = 1 + αρ. Our result for the original setting (1.4) will also be limited to the affine cost case, but we provide a shorter and less specific argument than the many-particle approximation of [5], and we require fewer assumptions on the velocity profile v. The fixed-point approach we develop appears to be rather flexible, since it permits us to handle several models of the form (1.6). We also adapt the arguments to exit behavior of the “capacity drop” kind (cf. [8, 7]), which is more realistic in the setting of crowd evacuation. However, we highlight the fact that our approach is restricted to situations where Lipschitz continuity of the turning curve ξ is guaranteed for the model at hand, which appears to be a strong restriction on its applicability; this restriction also appears in [5].
1.2. Abstract framework and general results. In this paper we propose an existence result for problem (1.6), elaborated through a fixed-point argument under abstract assumptions on I. Roughly speaking, we require that I map any admissible solution ρ of equation (1.6a) to a Lipschitz continuous turning curve ξ; furthermore, the Lipschitz constants of those turning curves must be uniformly bounded over all ρ. We stress that Hughes’ model with affine cost c(ρ) = 1 + αρ enters our abstract framework. However, it is not clear whether, for general costs satisfying (1.5), the required Lipschitz bounds hold true; this issue for the original Hughes’ model is left for further investigation. Models with a more regular dependence of ξ on ρ can be considered as well, including memory and relaxation effects, and for these models the Lipschitz continuity of ξ is justifiable for general costs.
First, let us introduce some notation that will be used throughout the whole paper. We denote {x < ξ(t)} := {(t, x) ∈ [0, T] × R s.t. x < ξ(t)}; analogously, we use {x = ξ(t)} and {x > ξ(t)}. For any r > 0, we write

BW1,∞(0, r) := { ξ ∈ W1,∞((0, T), R) s.t. ∥ξ̇∥∞ + ∥ξ∥∞ ≤ r }.

Analogously, we write BL1(0, r) for the set of ρ ∈ L1((0, T) × R, [0, 1]) such that ∥ρ∥L1((0,T)×R) ≤ r.
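As a quick sanity aid (not part of the paper), the membership condition defining BW1,∞(0, r) can be checked numerically for a sampled curve, replacing the essential supremum of ξ̇ by finite differences; the helper name and the test curve below are our own illustration.

```python
import numpy as np

def in_ball(xi, t, r):
    """Discrete check of the condition defining BW^{1,inf}(0, r):
    sup|xi'| + sup|xi| <= r, with the derivative replaced by
    finite differences on the sampling grid (rough sketch)."""
    slope = np.abs(np.diff(xi) / np.diff(t)).max()   # discrete sup |xi'|
    return slope + np.abs(xi).max() <= r

t = np.linspace(0.0, 1.0, 201)
xi = 0.3 * np.sin(2.0 * np.pi * t)   # sup|xi| = 0.3, sup|xi'| about 0.6*pi
```

For this curve the discrete bound ∥ξ̇∥∞ + ∥ξ∥∞ is roughly 2.18, so the curve lies in BW1,∞(0, 2.2) but not in BW1,∞(0, 1).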
In problem (1.6), ρ is taken as an admissible solution to the discontinuous-flux LWR equation (1.6a). On the way to proving the existence result, we propose and use a slightly simpler notion of admissible solution for this equation than the one used in [13], [2] and [1]; the two notions are equivalent.
Definition 1.1. Let ξ ∈ W1,∞((0, T)), let ρ0 ∈ L1(R, [0, 1]), let f be a concave positive flux such that f(0) = 0 = f(1), and set F(t, x, ρ) := sign(x − ξ(t)) f(ρ). We say that ρ ∈ L1((0, T) × R, [0, 1]) is an admissible solution to

(1.7)    ρt + F(t, x, ρ)x = 0,    ρ(t = 0, ·) = ρ0(·)

if for all φ ∈ C∞c((0, T) × R),

(1.8)    ∫∫Ω ρ φt + F(t, x, ρ) φx dt dx = 0,

and for all positive φ ∈ C∞c({x < ξ(t)}) (resp. φ ∈ C∞c({x > ξ(t)})) and all k ∈ [0, 1],

(1.9)    − ∫∫Ω |ρ − k| φt + q(ρ, k) φx dt dx − ∫R |ρ0 − k| φ(0, x) dx ≤ 0,

where we set

(1.10)    q(u, v) := sign(u − v) [F(t, x, u) − F(t, x, v)].

Note that this notion of solution makes sense for an arbitrary initial datum ρ0 ∈ L1(R, [0, 1]), but in order to keep consistency with the standard Hughes’ setting, we will restrict our attention to data ρ0 that fulfill (1.3).

B. ANDREIANOV, T. GIRARD
Remark 1.2. Note that in the above definition, no admissibility condition is prescribed at {x = ξ(t)}: only conservativity (the Rankine-Hugoniot condition following from (1.8)) is required at the location of the turning curve.
Remark 1.3. Definition 1.1 implies that ρ ∈ C0([0, T], L1(R)). This is proved by an adapted version of the argument in [9]; such an adapted proof can be found in [21]. This fact makes sense of the notation ρ(t, ·) without ambiguity.
For a given (and fixed) ξ ∈ W1,∞((0, T)), it is shown that this notion of solution yields a well-posed discontinuous-flux conservation law in L1((0, T) × R) when ρ0 belongs to L1(R; [0, 1]). We then define the solver operator

(1.11)    S0 : W1,∞((0, T)) −→ L1((0, T) × R),    ξ ↦ ρ.

This operator S0 maps a turning curve ξ to S0(ξ) = ρ, the unique admissible solution, in the sense of Definition 1.1, to (1.6a)-(1.6b) set up in the whole one-dimensional space.
Remark 1.4. The uniqueness of a solution in the sense of Definition 1.1 still holds for F(t, x, ρ) := 1{x<ξ(t)} fL(ρ) + 1{x>ξ(t)} fR(ρ), where fL (resp. fR) is a convex negative (resp. concave positive) flux such that fL(0) = fL(1) = fR(0) = fR(1) = 0; these are the core properties of the fluxes on which our proof relies. For instance, modeling a slanted corridor, we can consider fL,R(ρ) := vL,R ρ(1 − ρ), where vL and vR are positive constants accounting for the difference in speed for a pedestrian when moving to the right or the left exit.
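To make the two-flux setting of Remark 1.4 concrete, here is a minimal Godunov sketch for (1.7) with a frozen turning curve ξ ≡ 0, taking fL(ρ) = −ρ(1 − ρ) and fR(ρ) = ρ(1 − ρ) so as to match the convex-negative / concave-positive signs of the remark. The scheme, grid and parameters are entirely our own illustration and play no role in the paper’s analysis.

```python
import numpy as np

# Godunov sketch for (1.7) with frozen turning curve xi = 0 (our choices):
# fL convex negative to the left of xi, fR concave positive to the right.
vL = vR = 1.0
fL = lambda u: -vL * u * (1.0 - u)    # mass left of xi moves left
fR = lambda u:  vR * u * (1.0 - u)    # mass right of xi moves right

def godunov(f, ul, ur):
    """Godunov flux: min of f on [ul, ur] if ul <= ur, else max on [ur, ul]
    (evaluated on a fine sample, which is enough for a sketch)."""
    u = np.linspace(min(ul, ur), max(ul, ur), 33)
    return f(u).min() if ul <= ur else f(u).max()

n = 100
x = np.linspace(-1.0, 1.0, n + 1)            # cell interfaces
xc = 0.5 * (x[:-1] + x[1:])                  # cell centres
dx = x[1] - x[0]
dt = 0.4 * dx                                # CFL: max |f'| = 1 here
rho = np.where(np.abs(xc) < 0.5, 0.8, 0.0)   # initial datum in [0, 1]
mass0 = rho.sum() * dx

for _ in range(30):
    F = np.zeros(n + 1)                      # zero flux at outer boundaries
    for i in range(1, n):
        f = fL if x[i] < 0.0 else fR         # flux side chosen per interface
        F[i] = godunov(f, rho[i - 1], rho[i])
    rho = rho - dt / dx * (F[1:] - F[:-1])   # conservative update
```

The conservative update keeps Σ ρ dx exactly constant (a discrete counterpart of the Rankine-Hugoniot condition at the turning curve mentioned in Remark 1.2), and the monotone Godunov flux preserves the bounds 0 ≤ ρ ≤ 1 under the CFL restriction.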
We now present the notion of solution used for the generalized Hughes’ model given by system (1.6). Recalling Remark 1.3, it makes sense to require the operator equation (1.6c) to hold for all t ∈ [0, T]. In fact, we will require that ξ ∈ W1,∞((0, T)) in order to obtain our main result; we then use the classical embedding result to identify ξ with a unique element of C0([0, T]).
Definition 1.5. Consider I : L1((0, T) × R) −→ C0([0, T]). We say that (ρ, ξ) is a solution to the generalized Hughes’ model (1.6) if ρ is a solution to (1.6a)-(1.6b) in the sense of Definition 1.1 and, moreover, the equality ξ = I(ρ) holds in C0([0, T]).
Notice that such a solution can be seen as a fixed point of the composed operator S0 ◦ I. In order to prove the existence of a solution, we prove a variant of Schauder’s fixed-point theorem (see [25]). To be specific, denoting by I : ρ ↦ ξ the operator that serves to compute the interface and by D : ξ ↦ ρ the one that serves to compute the density, we prove the following statement:

Lemma 1.6. Let (X, ∥·∥X) be a Banach space, (Y, ∥·∥Y) a metric space and K a compact subset of Y. Take D : (K, ∥·∥Y) −→ (X, ∥·∥X) a continuous operator. Assume there exists B, a bounded closed convex subset of X, such that:

(1.12a)    I : (B, ∥·∥X) −→ (K, ∥·∥Y) is a continuous operator,
(1.12b)    D ◦ I(B) ⊂ B.

Then D ◦ I admits a fixed point in B.
Remark 1.7. We stress that the assumption (1.12a) implies that, on the subset B, I takes its values in K, making D ◦ I well-defined on B.
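Lemma 1.6 is a Schauder-type statement and therefore non-constructive, but the structure of the composed map D ◦ I is easy to visualise on a finite-dimensional toy problem. In the sketch below, every concrete choice is ours, not the paper’s: “densities” live in R^n as a stand-in for L1, the “turning curve” is reduced to a single scalar ξ in K = [−r, r], and D ◦ I happens to be a contraction, so plain Picard iteration exhibits the fixed point whose existence the lemma guarantees.

```python
import numpy as np

# Toy instance of the D∘I structure of Lemma 1.6 (all choices are ours).
n, r = 50, 1.0
x = np.linspace(-1.0, 1.0, n)

def I(rho):
    """Toy interface operator: a clipped weighted average of the density,
    so that its values lie in the compact set K = [-r, r]."""
    return float(np.clip(np.dot(x, rho) / n, -r, r))

def D(xi):
    """Toy density operator: a fixed bump centred at xi / 2."""
    return 0.5 * np.exp(-(x - 0.5 * xi) ** 2)

rho = np.zeros(n)
for _ in range(100):
    rho = D(I(rho))                     # iterate the composed operator

xi = I(rho)
residual = np.abs(D(xi) - rho).max()    # fixed-point residual for D∘I
```

After the iteration, ρ is (numerically) a fixed point of D ◦ I and the pair (ρ, ξ) satisfies the toy analogue of ξ = I(ρ) from Definition 1.5.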
The assumptions of Lemma 1.6 permit us to formulate sufficient conditions for the existence of a solution in the sense of Definition 1.5. Specifically, the use of the sets BW1,∞(0, r) (as K) and C0([0, T]) (as Y) is the key to the application of the Schauder fixed-point argument to S0 ◦ I under reachable assumptions on I in the Hughes’ model framework. We prove in Section 2 the following proposition, stating that S0 is continuous; this continuity matches the one required of the operator D in the above lemma.
Proposition 1.8. Let ρ0 verify (1.3). If f satisfies the non-degeneracy condition

(1.13)    meas{ x ∈ [−∥ρ∥∞, ∥ρ∥∞] s.t. f′(x) = 0 } = 0,

then the solver operator S0 : (W1,∞((0, T)), ∥·∥∞) −→ (L1((0, T) × R), ∥·∥L1((0,T)×R)) is continuous.
Combining the previous results, we state the main result of this paper:

Theorem 1.9. Let ρ0 verify (1.3). Let B be a convex closed bounded subset of L1((0, T) × R) and let I : (B, ∥·∥L1((0,T)×R)) −→ (C0([0, T], R), ∥·∥∞) be a continuous operator. Assume that f verifies (1.13). If there exists r > 0 such that:

(1.14a)    I(B) ⊂ BW1,∞(0, r),
(1.14b)    ∀ξ ∈ BW1,∞(0, r), the unique admissible solution to ρt + [sign(x − ξ(t)) f(ρ)]x = 0 is in B,

then there exists (ρ, ξ) a solution to the problem (1.6) in the sense of Definition 1.5.
Remark 1.10. One can interpret B as the set where one looks for solutions to (1.6a). The central point in order to use this theorem is to construct the set B; in the applications below, two different choices for B are encountered.
1.3. Applications. We search for properties of admissible solutions in the sense of Definition 1.1 that are independent of ξ. These properties, included in the construction of B, must guarantee that I(B) verifies (1.14a), but also that B is convex, bounded and closed in L1((0, T) × R).
In this subsection, we present three applications of Theorem 1.9. First, we consider the operator I0 associated to the problem (1.4b) with affine cost function (further detailed in Section 3). Let us exhibit the construction of B1, a set satisfying the conditions (1.14a)-(1.14b) for this choice of I.
Notice that, thanks to the L1-contraction property of the admissible solution ρ, which is justified within the uniqueness proof in Section 2, we have

(1.15)    ∀t ∈ [0, T], ∥ρ(t, ·)∥L1(R) ≤ ∥ρ0∥L1(R), whence ∥ρ∥L1([0,T]×R) ≤ T ∥ρ0∥L1(R).

Furthermore, we prove that for a certain fixed constant C > 0 (whose value will be made precise later) and for any ξ ∈ W1,∞, a weak solution to (1.6a) in the sense of (1.8) verifies (see Lemma 3.2 and also [5]):

(1.16)    ∀a, b ∈ R, ∀s, t ∈ [0, T],    | ∫_a^b (ρ(t, x) − ρ(s, x)) dx | ≤ C |t − s|.
Finally, considering an initial datum 0 ≤ ρ0 ≤ 1, we set:

(1.17)    B1 := { ρ ∈ BL1(0, T∥ρ0∥L1) s.t. 0 ≤ ρ ≤ 1 and ρ verifies (1.16) }.
Applying Theorem 1.9 with B1 given by (1.17), we get:

Proposition 1.11. Let I0 : B1 −→ C0([0, T], R) be the operator associated with equation (1.4b) with affine cost c(ρ) = 1 + αρ. If f verifies (1.13), then there exists (ρ, ξ) a solution to the problem (1.4) in the sense of Definition 1.5.
B. ANDREIANOV, T. GIRARD

As a second case, we treat Iδ, the operator associated with a modified version of equation (1.4b) in which ρ is replaced by an average density over the recent past (see (1.4b’)). This modification is inspired by the use of “subjective density” in pedestrian and traffic flows, proposed, e.g., in [10] and [8, 7] (cf. Section 4, where subjective densities are used to model constrained evacuation at exits); this choice introduces an inertia effect into agents’ perception of the crowd density. In that setting, we can prove that the image of Iδ is contained in a bounded subset of W^{1,∞}((0, T)) without requiring the property (1.16).
Consequently, we recover the global existence result for any cost c verifying (1.5) with the set B2 merely given by:

B2 = { ρ ∈ B_{L¹}(0, T∥ρ0∥_{L¹}) s.t. 0 ≤ ρ ≤ 1 }.
As a third example, we consider Ĩǫ, the operator associated with problem (1.4b) with a relaxed equilibrium, modeling, in a way different from Iδ, an inertia effect in the interface dynamics. In this case, the set B2 also satisfies all the conditions needed to apply Corollary 1.9. Finally, another series of applications (which extends all the previous results to models with different, phenomenologically relevant behavior of agents at exits) is provided in Section 4.
1.4. Outline. In Section 2, we prove the main results of this paper, namely Theorem 1.9, Lemma 1.6 and Proposition 1.8. These proofs hold in an abstract framework where the choice of I and B is not prescribed. Then, in Section 3, we detail the construction involving the set B1 satisfying the assumptions of Theorem 1.9 in the case of I0 being the operator associated with equation (1.4b) with affine cost. We also discuss the case of a general cost satisfying (1.5) and solve it for the modified operators Iδ and Ĩǫ using the set B2. Eventually, in Section 4, we extend Theorem 1.9 to a situation with constrained evacuation at the exits x = ±1.
2. Proof of the main result. We first deduce Lemma 1.6 from the Schauder fixed-point theorem.

Proof of Lemma 1.6. We recall that, thanks to condition (1.12a), D ∘ I is well defined. Moreover, D and I are continuous, so D ∘ I is continuous from B into itself. Take any subset A of B. The set I(A) ⊂ K is relatively compact in (Y, ∥·∥_Y). Since D is continuous from (K, ∥·∥_Y) into (X, ∥·∥_X), D ∘ I(A) is a relatively compact subset of X. Consequently, D ∘ I is a compact operator from B into itself. Furthermore, B is a bounded, closed, convex subset of the Banach space X. We apply the Schauder fixed-point theorem (see [25]) and conclude the existence of a fixed point in B. □
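The Schauder argument is nonconstructive, but the mechanism can be illustrated by iterating a composed map on a discretized set B of densities with values in [0, 1]. The operators I (a scalar "observation" of the density) and D (a "solver" returning a density profile) below are hypothetical toys chosen so that the iteration converges; they stand in for, and are far simpler than, the operators of the paper.

```python
import numpy as np

# Toy illustration of the fixed-point mechanism behind Lemma 1.6:
# iterate the composed map D o I and watch the residual vanish.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]

def I(rho):
    # hypothetical continuous observation: half the centre of mass of rho
    mass = np.sum(rho) * dx
    return 0.5 * np.sum(x * rho) * dx / mass

def D(xi):
    # hypothetical solver: a density profile kept inside B (values in [0, 1])
    return np.clip(np.exp(-(x - xi) ** 2), 0.0, 1.0)

rho = D(1.0)                      # start from a bump centred at 1
for _ in range(50):
    rho = D(I(rho))               # the bump's centre is halved each pass

residual = np.max(np.abs(rho - D(I(rho))))
```

Here the bump centre obeys xi ↦ xi/2, so the iterates converge to the bump centred at 0 and the fixed-point residual becomes negligible.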
In order to apply Lemma 1.6 with D = S0, the solver associated with the notion of solution of Definition 1.1 (see (1.11)), we first need to check that S0 is well defined from W^{1,∞}((0, T)) into L¹((0, T) × ℝ) when ∥ρ0∥_{L¹(ℝ)} < +∞. This is equivalent to well-posedness for the problem (1.7). We prove below that, thanks to the particular choice of fluxes on each side of the turning curve (emphasized in Remark 1.4), Definition 1.1 is restrictive enough to grant uniqueness. This notion of solution is, however, less restrictive than the one proposed in [13, 1]. It follows that both notions are equivalent, and the existence of such solutions is then directly inherited from the proof found in [1]. Note that one can also prove the existence result for our notion of solution through the convergence of a finite volume scheme (we do so in Section 4, in the context of flux-limited behavior at the exits x = ±1).
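As a minimal sketch of the finite volume route, the block below advances ρ_t + f(ρ)_x = 0 with a global Lax–Friedrichs scheme. The LWR-type flux f(ρ) = ρ(1 − ρ) and all grid parameters are assumptions made for illustration; this is not the scheme of Section 4.

```python
import numpy as np

# Lax-Friedrichs finite volume sketch for rho_t + f(rho)_x = 0,
# with the assumed flux f(rho) = rho * (1 - rho).
def f(rho):
    return rho * (1.0 - rho)

def lax_friedrichs_step(rho, dx, dt):
    # numerical flux at interior interfaces, copy (outflow) boundary fluxes
    rl, rr = rho[:-1], rho[1:]
    flux = 0.5 * (f(rl) + f(rr)) - 0.5 * (dx / dt) * (rr - rl)
    flux = np.concatenate(([f(rho[0])], flux, [f(rho[-1])]))
    return rho - (dt / dx) * (flux[1:] - flux[:-1])

nx, dx = 600, 6.0 / 600
x = -3.0 + dx * (np.arange(nx) + 0.5)          # cell centres on (-3, 3)
dt = 0.4 * dx                                   # CFL: max |f'| = 1 on [0, 1]
rho = np.where(np.abs(x) <= 1.0, 0.8, 0.0)      # initial datum, 0 <= rho0 <= 1

mass0 = np.sum(rho) * dx
for _ in range(200):
    rho = lax_friedrichs_step(rho, dx, dt)
mass = np.sum(rho) * dx
```

Because the scheme is conservative and monotone under this CFL condition, the discrete mass is preserved (while the support stays away from the boundary) and the solution remains in [0, 1].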
Theorem 2.1. Let ρ, ρ̂ be two entropy solutions in the sense of Definition 1.1 with initial datum ρ0 (resp. ρ̂0). Let Lf be the Lipschitz constant of f. If ξ ∈ W^{1,∞}((0, T)), we have:

for a.e. t ∈ [0, T], ∀a, b ∈ ℝ,  ∫_a^b |ρ(t, x) − ρ̂(t, x)| dx ≤ ∫_{a−Lf t}^{b+Lf t} |ρ0(x) − ρ̂0(x)| dx.

In particular, there exists at most one entropy solution associated with a given initial datum ρ0.
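The contraction property of Theorem 2.1 can be observed numerically: for a monotone conservative scheme, the discrete L¹ distance between two numerical solutions does not increase in time. The block below uses a global Lax–Friedrichs scheme with the assumed flux f(ρ) = ρ(1 − ρ), chosen purely for illustration, away from any turning curve.

```python
import numpy as np

# Discrete L1 contraction for two solutions of rho_t + f(rho)_x = 0,
# computed with a monotone Lax-Friedrichs scheme (illustrative flux).
def f(rho):
    return rho * (1.0 - rho)

def step(rho, dx, dt):
    rl, rr = rho[:-1], rho[1:]
    flux = 0.5 * (f(rl) + f(rr)) - 0.5 * (dx / dt) * (rr - rl)
    flux = np.concatenate(([f(rho[0])], flux, [f(rho[-1])]))
    return rho - (dt / dx) * (flux[1:] - flux[:-1])

nx, dx = 600, 6.0 / 600
x = -3.0 + dx * (np.arange(nx) + 0.5)
dt = 0.4 * dx                                   # CFL condition, monotone regime
rho = np.where(np.abs(x) <= 1.0, 0.8, 0.0)      # initial datum rho0
rho_hat = np.where(np.abs(x) <= 1.0, 0.5, 0.0)  # initial datum rho0_hat

dist0 = np.sum(np.abs(rho - rho_hat)) * dx      # initial L1 distance
for _ in range(200):
    rho, rho_hat = step(rho, dx, dt), step(rho_hat, dx, dt)
dist = np.sum(np.abs(rho - rho_hat)) * dx       # final L1 distance
```

By the Crandall–Tartar argument for monotone conservative schemes, the discrete distance is nonincreasing, mirroring the statement of the theorem with a = −∞, b = +∞.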
In order to prove this theorem, we introduce notation for the right and left strong traces of ρ along a Lipschitz curve ξ. Let ξ ∈ W^{1,∞}((0, T), ℝ). Then γ_Lρ ∈ L^∞((0, T)) (resp. γ_Rρ) is such that, for any φ ∈ C0([0, 1]),

ess lim_{ǫ→0+} (1/ǫ) ∫_0^T ∫_{ξ(t)−ǫ}^{ξ(t)} |φ(ρ(t, x)) − φ(γ_Lρ(t))| dx dt = 0

AN EXISTENCE RESULT FOR HUGHES’ MODEL

(respectively, ess lim_{ǫ→0+} (1/ǫ) ∫_0^T ∫_{ξ(t)}^{ξ(t)+ǫ} |φ(ρ(t, x)) − φ(γ_Rρ(t))| dx dt = 0).

The existence of those traces is proven in [24].
Remark 2.2. Generalization of the approach of the present paper to a general cost function c, for the original Hughes model, may require going below the Lipschitz regularity of ξ. In this respect, let us point out that extending the above uniqueness claim to W^{1,1} regularity of ξ is feasible, while weakening the regularity of ξ even more presents a serious difficulty for the theory of discontinuous-flux conservation laws [4].
Proof of Theorem 2.1. Recalling Remark 1.4, and for a more transparent presentation of the proof, we denote fR = f and fL = −f. The main idea of the proof consists of using Kruzhkov’s doubling-of-variables technique (see [14]) on each side of the curve {x = ξ(t)}. Since ξ is Lipschitz continuous, we can join both pieces by taking left and right traces along this turning curve, following the general approach of [4, 8]. We get, for any φ ∈ D^+,

(∗)  −∬_Ω |ρ − ρ̂| φ_t + q(ρ, ρ̂) φ_x ≤ ∫_0^T φ(t, ξ(t)) [q_R(γ_Rρ, γ_Rρ̂) − q_L(γ_Lρ, γ_Lρ̂)] dt,

where q_{L,R}(ρ, ρ̂) := sign(ρ − ρ̂) ( f_{L,R}(ρ) − f_{L,R}(ρ̂) − ξ̇(t)(ρ − ρ̂) ).
On the other side, using the existence of traces, we also recover from (1.8) the Rankine–Hugoniot condition:

(∗∗ρ)  for a.e. t ∈ (0, T),  f_R(γ_Rρ(t)) − ξ̇(t)γ_Rρ(t) = f_L(γ_Lρ(t)) − ξ̇(t)γ_Lρ(t).

We also have the analogous relation for ρ̂, which we denote (∗∗ρ̂).
Fix t ∈ (0, T) such that (∗∗ρ) and (∗∗ρ̂) hold. We denote by Γ_{L,R} the set of values for γ_Lρ (resp. γ_Rρ) that verify (∗∗ρ):

Γ_{L,R} := { a ∈ ℝ s.t. ∃b ∈ ℝ, f_{L,R}(a) − ξ̇(t)a = f_{L,R}(b) − ξ̇(t)b }.

Due to the particular choice of the pair of fluxes (f_L, f_R), these sets are non-empty. Their geometry is pictured below.
[Figure: graphs of y = f_L(x) − ξ̇(t)x and y = f_R(x) − ξ̇(t)x, with the sets Γ_L and Γ_R indicated.]

Recalling the properties of f_L and f_R emphasized in Remark 1.4 and using the signs of f′_L and f′_R, we let the reader verify that, for any ξ̇(t), x ↦ f_R(x) − ξ̇(t)x has the same monotonicity on Γ_R as x ↦ f_L(x) − ξ̇(t)x on Γ_L. Consequently, if (γ_Lρ, γ_Rρ) verifies (∗∗ρ) and (γ_Lρ̂, γ_Rρ̂) verifies (∗∗ρ̂),

sign(γ_Rρ − γ_Rρ̂) sign( f_R(γ_Rρ) − f_R(γ_Rρ̂) − ξ̇(t)(γ_Rρ − γ_Rρ̂) ) = sign(γ_Lρ − γ_Lρ̂) sign( f_L(γ_Lρ) − f_L(γ_Lρ̂) − ξ̇(t)(γ_Lρ − γ_Lρ̂) ),

while subtracting (∗∗ρ̂) from (∗∗ρ) implies that

f_R(γ_Rρ) − f_R(γ_Rρ̂) − ξ̇(t)(γ_Rρ − γ_Rρ̂) = f_L(γ_Lρ) − f_L(γ_Lρ̂) − ξ̇(t)(γ_Lρ − γ_Lρ̂).
Therefore we have: for a.e. t ∈ (0, T), q_R(γ_Rρ, γ_Rρ̂) − q_L(γ_Lρ, γ_Lρ̂) = 0.
Consequently, from (∗), we recover the global Kato inequality: for any φ ∈ D^+(Ω),

−∬ |ρ − ρ̂| φ_t + q(ρ, ρ̂) φ_x ≤ 0.

The remaining arguments are identical to the classical Kruzhkov framework. Integrating against the trapezoid 1_{[0,t]}(s) 1_{[a−Lf(t−s), b+Lf(t−s)]}(x), Lf being the Lipschitz constant of f, we get the localized L¹ contraction property:

(2.1)  ∫_a^b |ρ(t, x) − ρ̂(t, x)| dx ≤ ∫_{a−Lf t}^{b+Lf t} |ρ(0, x) − ρ̂(0, x)| dx.  □

Consequently, the solver operator S0 is well defined from W^{1,∞}((0, T)) into L¹((0, T) × ℝ).
In order to apply Lemma 1.6 with D = S0 : (W^{1,∞}((0, T)), ∥·∥_∞) → (L¹((0, T) × ℝ), ∥·∥_{L¹((0,T)×ℝ)}), we also show the continuity of this operator. Let us denote, for any a < b ∈ ℝ and s < t ∈ [0, T], the trapezoid:

(2.2)  T^{s,t}_{a,b} := { (τ, x) ∈ (0, T) × ℝ s.t. τ ∈ [s, t], x ∈ (a + (τ − s)Lf, b − (τ − s)Lf) },

where Lf is the Lipschitz constant of f.
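The trapezoid (2.2) is simply the backward cone of dependence over the interval (a, b), whose spatial section shrinks at the Lipschitz speed Lf as τ increases. A small membership predicate makes this geometry concrete; the values of Lf and of the endpoints below are arbitrary illustrative choices.

```python
# Membership predicate for the trapezoid (2.2): tau in [s, t] and
# x in the interval (a + (tau - s) Lf, b - (tau - s) Lf), which
# shrinks at the Lipschitz speed Lf of the flux as tau grows.
def in_trapezoid(tau, y, s, t, a, b, Lf):
    return (s <= tau <= t) and (a + (tau - s) * Lf < y < b - (tau - s) * Lf)

Lf = 1.0
s, t, a, b = 0.0, 0.5, -1.0, 1.0

# the same point x = 0.9 is inside at the base but excluded higher up
wide = in_trapezoid(0.0, 0.9, s, t, a, b, Lf)    # section is (-1, 1)
narrow = in_trapezoid(0.4, 0.9, s, t, a, b, Lf)  # section is (-0.6, 0.6)
```
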
We isolate the following useful lemma, which comes from (2.1).

Lemma 2.3. Let ρ0 satisfy (1.3), ξ ∈ W^{1,∞}((0, T)) and ρ be the entropy solution in the sense of Definition 1.1 to (1.7) on (0, T) × ℝ. Denote by ρ̂ the Kruzhkov entropy solution on (s, t) × ℝ to¹

ρ̂_t + f(ρ̂)_x = 0,  ρ̂(s, ·) = ρ(s, ·) 1_{(a,b)}(·).

Then, for any a < b ∈ ℝ, s < t ∈ [0, T], there holds

(2.3)  T^{s,t}_{a,b} ⊂ {x > ξ(t)}  ⟹  ρ = ρ̂ a.e. on T^{s,t}_{a,b}.
Proof. This lemma immediately follows from (2.1). □
We now prove Proposition 1.8 using this lemma.

Proof of Proposition 1.8. Consider (ξ_n)_{n∈ℕ} and ξ ∈ W^{1,∞}((0, T)) such that ∥ξ_n − ξ∥_∞ → 0. We denote ρ_n := S0(ξ_n). Let K be a compact subset of {x > ξ(t)}, and let ǫ > 0 be such that K ⊂ {x > ξ(t) + ǫ}. We cover K by a finite number of trapezoids of the form (2.2). Without loss of generality, we can suppose that each trapezoid is contained in {x > ξ(t) + ǫ}:

K ⊂ ∪_{i∈I} T^{s_i,t_i}_{a_i,b_i} ⊂ {x > ξ(t) + ǫ},  Card(I) < +∞.

Since ∥ξ_n − ξ∥_∞ → 0, for any ǫ > 0 there exists n0 ∈ ℕ such that ∀t ∈ [0, T], n ≥ n0 ⇒ |ξ_n(t) − ξ(t)| ≤ ǫ. This implies ξ_n(t) ∈ [ξ(t) − ǫ, ξ(t) + ǫ]. Then,

(2.4)  ∀x ∈ ℝ \ [ξ(t) − ǫ, ξ(t) + ǫ],  sign(x − ξ_n(t)) = sign(x − ξ(t)).

Then, for such an n0 and any n ≥ n0, each trapezoid T^{s_i,t_i}_{a_i,b_i} ⊂ {x > ξ_n(t)}. Using Lemma 2.3, for any n ≥ n0, ρ_n is equal almost everywhere in T^{s_i,t_i}_{a_i,b_i} to the Kruzhkov entropy solution of

ρ_t + f(ρ)_x = 0,  ρ(s_i, ·) = ρ_n(s_i, ·) 1_{(a_i,b_i)}(·).
¹Here ρ(s, ·) is understood in view of s being a Lebesgue point of ρ ∈ L^∞((0, T), L¹(ℝ)). Recalling Remark 1.3, this is in fact true for any s ∈ [0, T].
AN EXISTENCE RESULT FOR HUGHES' MODEL 9
We are now in a position to apply the averaging compactness lemma (see Theorem 5.4.1 in [19]) on the trapezoid T^{s_0,t_0}_{a_0,b_0}. We get a subsequence (ρ_{n_k})_{k∈N} that converges in L1(T^{s_0,t_0}_{a_0,b_0}). We then apply the averaging compactness lemma with (ρ_{n_k})_k on T^{s_1,t_1}_{a_1,b_1}. Repeating this process for each i ∈ I, we recover a subsequence (ρ_{n_j})_j that converges in L1(∪_{i∈I} T^{s_i,t_i}_{a_i,b_i}). Then (ρ_{n_j})_j converges in L1(K).
To conclude, we point out that this reasoning holds for any K ⊂ {x > ξ(t)}. It is also true for compact subsets of {x < ξ(t)}. Since ξ is Lipschitz, meas({x = ξ(t)}) = 0. Consequently, there exists a subsequence (ρ_{n_k}) that converges almost everywhere on (0, T) × R and in L1_loc((0, T) × R). Moreover, we have ρ_{n_k} → ρ in L1((0, T) × R), because for [a, b] ∩ [−1, 1] = ∅, ρn = 0 on T^{0,T}_{a,b}, due to the choice of ρ0 verifying (1.3).
Now, ρ is actually S0(ξ). Indeed, recall that ρ has no admissibility condition to satisfy on {x = ξ(t)} beyond the Rankine–Hugoniot relation. Then, we can pass to the limit in the entropy inequalities (1.9) (where, for n large enough, the support of the test function does not intersect the curve {x = ξn(t)} for t ∈ [0, T]) and pass to the limit in (1.8) by dominated convergence.
This reasoning can be reproduced for any subsequence of (ρn)n. By a classical compactness argument, since every converging subsequence (S0(ξ_{n_k}))_{k∈N} converges to S0(ξ), the whole sequence (S0(ξn))n converges in L1 to S0(ξ). Hence S0 : (W^{1,∞}((0, T)), ∥·∥∞) → (L1((0, T) × R), ∥·∥_{L1((0,T)×R)}) is continuous.
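Away from the turning curve the equation is a fixed scalar conservation law, so jumps of ρ obey the usual Rankine–Hugoniot condition σ = [f(ρ)]/[ρ]. A one-line sketch of that relation, again assuming the LWR-type flux f(ρ) = ρ(1 − ρ) for illustration only (not the paper's general f):

```python
def rankine_hugoniot_speed(f, u_left, u_right):
    """Shock speed sigma = (f(u_l) - f(u_r)) / (u_l - u_r), for u_l != u_r."""
    return (f(u_left) - f(u_right)) / (u_left - u_right)

f = lambda u: u * (1.0 - u)                 # illustrative flux
sigma = rankine_hugoniot_speed(f, 0.9, 0.2) # (0.09 - 0.16) / 0.7 = -0.1
```

On {x = ξ(t)} itself, where the flux switches sign, only this conservation relation is imposed on ρ, with no extra entropy admissibility condition, which is exactly why the limit passage above needs no information on the curve.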
We now combine all the previous results to get existence of a solution in the sense of Definition 1.5.
Proof of Theorem 1.9. Suppose there exists r > 0 such that (1.14a)–(1.14b) are verified. Using the notations of Theorem 1.6, we take:
Y = (C0([0, T]), ∥·∥∞),
X = (L1((0, T) × R), ∥·∥_{L1((0,T)×R)}),
K the compact set of C0([0, T]) obtained as the image of B_{W^{1,∞}}(0, r) under the standard embedding.
Using Proposition 1.8 and Theorem 2.1, we know that S0 : (K, ∥·∥Y) → (X, ∥·∥X) is well defined and continuous. Further, notice that condition (1.14a) is equivalent to (1.12a) and that condition (1.14b) implies (1.12b). We are now in a position to use Lemma 1.6. We conclude that there exists a solution to (1.6) in the sense of Definition 1.5.
3. Lipschitz continuity of the turning curve: examples.
In this section, we enumerate examples of the abstract problem (1.6)
{ ρt + [sign(x − ξ(t)) f(ρ)]x = 0
  ρ(0, x) = ρ0(x)
  ξ = I(ρ),
for which we can construct a set B such that the prescribed operator I satisfies the required properties in order to apply Theorem 1.9; this includes the original Hughes' model (1.4) with affine costs and its modifications, taking into account time-inertia effects and allowing for general costs. Note that further examples, with modified exit conditions, are considered in Section 4. For each of these examples, we exhibit the construction of such a set; consequently, we get existence of a solution in the sense of Definition 1.5 in those situations.
3.1. Hughes' model with affine cost. We first consider the model (1.4):
{ ρt + [sign(x − ξ(t)) ρ v(ρ)]x = 0
  ∫_{−1}^{ξ(t)} c(ρ(t, x)) dx = ∫_{ξ(t)}^{1} c(ρ(t, x)) dx,
with initial datum satisfying (1.3), where we choose, for some α > 0,
(3.3) c(p) = 1 + αp.
First, let us recall the definition of the set B1 constructed in the introduction:
(1.17) B1 = { ρ ∈ B_{L1}(0, T∥ρ0∥_{L1}) s.t. 0 ≤ ρ ≤ 1 and ρ verifies (1.16) }.
10 B. ANDREIANOV, T. GIRARD
In this setup, we have the following proposition:
Proposition 3.1. Assume the cost is given by (3.3). Then the following properties hold:
1. For any ξ ∈ W^{1,∞}((0, T)), S0(ξ) ∈ B1.
2. There exists r > 0 such that, for any ρ ∈ B1, there exists a unique solution ξ ∈ B_{W^{1,∞}}(0, r) to (1.4b). We denote by I0 the operator that maps ρ ∈ B1 to the unique solution ξ of (1.4b). Consequently, this operator is well defined and single-valued.
3. I0 : (B1, ∥·∥_{L1((0,T)×R)}) → (W^{1,∞}([0, T]), ∥·∥∞) is continuous.
4. B1 is closed, convex and bounded in L1((0, T) × R).
Consequently, I0 verifies (1.14a)–(1.14b) for the set B1. We apply Theorem 1.9 and get the desired existence of a solution for the problem (1.4) with affine cost (3.3). This proves Proposition 1.11.
In order to prove Proposition 3.1, we rely on two lemmas, which we isolate so that we can reuse them in the other examples.
Lemma 3.2. Let a, b ∈ R, a < b. Let s, t ∈ [0, T], s < t. Fix ξ ∈ W^{1,∞}((0, T)). We denote by ρ a solution in the sense of Definition 1.1. Then, there exists C > 0, independent of a, b, s, t, ξ and ρ, such that:
(3.4) |∫_a^b ρ(t, x) − ρ(s, x) dx| ≤ C|t − s|.
We recall that there is no ambiguity in considering ρ(t, ·), since ρ ∈ C0([0, T], L1(R)) (see Remark 1.3).
Proof of Lemma 3.2. Let (κn)_{n∈N} be a mollifier. We set Ψ(τ, x) := 1_{[a,b]}(x) 1_{[s,t]}(τ) and φ(τ, x) := (Ψ ∗ κn)(τ, x). Using φ as a test function in (1.8) and letting n → +∞, we get:
∫_a^b ρ(s, x) − ρ(t, x) dx + ∫_s^t F(τ, a, ρ(τ, a)) − F(τ, b, ρ(τ, b)) dτ = 0.
Consequently,
|∫_a^b ρ(t, x) − ρ(s, x) dx| ≤ |∫_s^t F(τ, a, ρ(τ, a)) − F(τ, b, ρ(τ, b)) dτ| ≤ (2 sup_{p∈[0,1]} |f(p)|) |t − s|.
Lemma 3.3. Let s < t ∈ [0, T]. Let ξ be a solution to (1.4b). We denote ξ̲ := min(ξ(t), ξ(s)) and ξ̄ := max(ξ(t), ξ(s)). Then
(3.5) 2|ξ(t) − ξ(s)| ≤ |∫_{−1}^{ξ̲} c(ρ(t, x)) − c(ρ(s, x)) dx − ∫_{ξ̄}^{1} c(ρ(t, x)) − c(ρ(s, x)) dx|.
Proof of Lemma 3.3.
We first treat the case ξ(s) ≤ ξ(t). We have:
∫_{−1}^{ξ(s)} c(ρ(s, x)) dx = ∫_{ξ(s)}^{ξ(t)} c(ρ(s, x)) dx + ∫_{ξ(t)}^{1} c(ρ(s, x)) dx,
∫_{−1}^{ξ(s)} c(ρ(t, x)) dx = − ∫_{ξ(s)}^{ξ(t)} c(ρ(t, x)) dx + ∫_{ξ(t)}^{1} c(ρ(t, x)) dx.
Subtracting the two equalities,
∫_{ξ(s)}^{ξ(t)} c(ρ(s, x)) + c(ρ(t, x)) dx = ∫_{−1}^{ξ(s)} c(ρ(s, x)) − c(ρ(t, x)) dx − ∫_{ξ(t)}^{1} c(ρ(s, x)) − c(ρ(t, x)) dx.
Conversely, if ξ(s) ≥ ξ(t), an analogous argument gives:
∫_{ξ(t)}^{ξ(s)} c(ρ(s, x)) + c(ρ(t, x)) dx = ∫_{−1}^{ξ(t)} c(ρ(t, x)) − c(ρ(s, x)) dx − ∫_{ξ(s)}^{1} c(ρ(t, x)) − c(ρ(s, x)) dx.
Using the fact that c ≥ 1, we get:
2|ξ(t) − ξ(s)| = 2(ξ̄ − ξ̲) ≤ ∫_{ξ̲}^{ξ̄} c(ρ(s, x)) + c(ρ(t, x)) dx ≤ |∫_{−1}^{ξ̲} c(ρ(s, x)) − c(ρ(t, x)) dx − ∫_{ξ̄}^{1} c(ρ(s, x)) − c(ρ(t, x)) dx|.
We are now ready to prove Proposition 3.1.
Proof of Proposition 3.1. First, consider ρ0 satisfying (1.3). Using ρ̂ = 0 in (2.1), we prove that for all t ∈ [0, T], ∥ρ(t, ·)∥_{L1(R)} ≤ ∥ρ0∥_{L1(R)}. This readily yields:
(1.15) ∥ρ∥_{L1([0,T]×R)} ≤ T∥ρ0∥_{L1(R)}.
Combining this result with Lemma 3.2, we prove the first assertion of Proposition 3.1.
Second, fix ρ ∈ B1. We prove existence and uniqueness of ξ ∈ L∞([0, T]) satisfying (1.4b) for any t ∈ [0, T]. Let t ∈ [0, T]; we set:
Ψ+(a) := ∫_{−1}^{a} c(ρ(t, x)) dx, Ψ−(a) := ∫_{a}^{1} c(ρ(t, x)) dx.
One can notice that, because c > 0, Ψ+ is a continuous, strictly increasing function, while Ψ− is continuous and strictly decreasing on [−1, 1]. Therefore, a ↦ Ψ+(a) − Ψ−(a) is continuous, strictly increasing, negative at a = −1 and positive at a = 1. Consequently, there exists exactly one ã ∈ (−1, 1) such that Ψ+(ã) = Ψ−(ã). This can be done for any t ∈ [0, T]; consequently, we get existence and uniqueness of ξ ∈ L∞.
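The monotonicity argument just given is effective: since a ↦ Ψ+(a) − Ψ−(a) is continuous, strictly increasing, and changes sign on [−1, 1], the balance point can be located by bisection. A minimal sketch with the affine cost (3.3); the quadrature rule, grid size, tolerance, and sample densities are choices of this example, not of the paper.

```python
def balance_point(rho, alpha=2.0, tol=1e-10):
    """Unique a in (-1, 1) with Psi_plus(a) = Psi_minus(a), where
    Psi_plus(a)  = int_{-1}^{a} c(rho(x)) dx,
    Psi_minus(a) = int_{a}^{1}  c(rho(x)) dx,   c(p) = 1 + alpha * p.
    Found by bisection on the strictly increasing map Psi_plus - Psi_minus."""
    def cost_integral(a, b, n=2000):     # midpoint rule on [a, b]
        h = (b - a) / n
        return sum((1.0 + alpha * rho(a + (k + 0.5) * h)) * h for k in range(n))
    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cost_integral(-1.0, mid) - cost_integral(mid, 1.0) < 0.0:
            lo = mid                     # balance point lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Symmetric density: the balance point must be the center of the corridor.
xi_sym = balance_point(lambda x: 0.5)
# Left-heavy density: more cost mass on the left pulls the balance point left;
# for this profile the exact value is a = -2/7.
xi_left = balance_point(lambda x: 0.9 if x < 0 else 0.1)
```

The strict monotonicity proved above is exactly what guarantees that the bisection bracket always contains the unique root.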
We now prove that ξ ∈ W^{1,∞}([0, T]). Using Lemma 3.3, we get:
2|ξ(t) − ξ(s)| ≤ |∫_{−1}^{ξ̲} c(ρ(t, x)) − c(ρ(s, x)) dx − ∫_{ξ̄}^{1} c(ρ(t, x)) − c(ρ(s, x)) dx|
≤ α |∫_{−1}^{ξ̲} ρ(t, x) − ρ(s, x) dx| + α |∫_{ξ̄}^{1} ρ(t, x) − ρ(s, x) dx|.
Using Lemma 3.2, with the choice (3.3) of the cost, we get:
2|ξ(t) − ξ(s)| ≤ 2αC|t − s|.
We conclude that, taking r = αC, ξ is always in B_{W^{1,∞}}(0, r).
We now prove the continuity of the operator I0. Consider ρ, ρn ∈ B1. Then, for a given t ∈ [0, T], using (1.4b) for both ξ := I0(ρ) and ξn := I0(ρn), we recover:
∫_{ξn(t)}^{ξ(t)} c(ρ) + ∫_{−1}^{ξn(t)} c(ρ) − ∫_{−1}^{ξn(t)} c(ρn) = ∫_{ξ(t)}^{ξn(t)} c(ρ) + ∫_{ξn(t)}^{1} c(ρ) − ∫_{ξn(t)}^{1} c(ρn).
Rearranging the integrals, we get:
2 ∫_{ξn(t)}^{ξ(t)} c(ρ) = ∫_{−1}^{1} [c(ρ) − c(ρn)] sign(x − ξn(t)).
12 B. ANDREIANOV, T. GIRARD
Notice that
\[
\int_0^T |\xi - \xi_n| \le \int_0^T \left| \int_{\xi(t)}^{\xi_n(t)} c(\rho) \right|
\le \frac{1}{2} \int_0^T \left| \int_{-1}^{1} \mathrm{sign}(x - \xi_n(t)) \bigl[c(\rho) - c(\rho_n)\bigr] \right|
\le \frac{1}{2} \int_0^T \int_{-1}^{1} |c(\rho) - c(\rho_n)|
\le \frac{\alpha}{2} \int_0^T \int_{-1}^{1} |\rho - \rho_n|.
\]
Consequently, if $\|\rho - \rho_n\|_{L^1((0,T)\times\mathbb{R})} \to 0$, then $\|\xi - \xi_n\|_{L^1((0,T))} \to 0$. Recall that $\xi, \xi_n \in I_0(B_1)$ are $r$-Lipschitz. On any open subset of $[0,T]$ there exists a point $t$ where the continuous function $\xi(\cdot) - \xi_n(\cdot)$ is less than or equal to its $L^1$-average. Using the fact that $[0,T]$ can be covered by a finite $\epsilon$-network and that the derivative of $\xi(\cdot) - \xi_n(\cdot)$ is bounded on this network, we recover that $\|\xi - \xi_n\|_\infty \to 0$ when $\|\rho - \rho_n\|_{L^1((0,T)\times\mathbb{R})} \to 0$. This proves the third point of Proposition 3.1.
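The last step, passing from $L^1$ convergence to uniform convergence via a uniform Lipschitz bound, can be illustrated numerically. The sketch below is not from the paper; it checks the elementary inequality $\|f\|_\infty^2 \le 2r\|f\|_{L^1}$ for an $r$-Lipschitz function $f$ whose supremum is attained at distance at least $\|f\|_\infty/r$ from the endpoints of the interval.

```python
import numpy as np

def l1_norm(f, t):
    """Trapezoidal approximation of the L1 norm of f on the grid t."""
    return float(np.sum(0.5 * (np.abs(f[1:]) + np.abs(f[:-1])) * np.diff(t)))

T, r = 1.0, 5.0
t = np.linspace(0.0, T, 20001)
for h in [0.5, 0.1, 0.01]:
    # r-Lipschitz triangular bump of height h, centred in [0, T]
    f = np.maximum(0.0, h - r * np.abs(t - T / 2))
    sup = float(np.max(np.abs(f)))
    # sup norm squared is controlled by the L1 norm: ||f||_inf^2 <= 2 r ||f||_L1
    assert sup ** 2 <= 2.0 * r * l1_norm(f, t) + 1e-9
```

So within a family of uniformly Lipschitz functions, $L^1$-smallness forces uniform smallness, which is the mechanism behind the $\epsilon$-network argument above.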
Eventually, let $\rho_1, \rho_2 \in B_1$ and $\lambda \in [0,1]$; it is readily checked that $\lambda\rho_1 + (1-\lambda)\rho_2$ still satisfies (3.4). Then $B_1$ is convex. It is also readily checked that we can pass to the $L^1((0,T)\times\mathbb{R})$ limit in (3.4), proving that $B_1$ is closed. By construction, $B_1$ is bounded. That ends the proof of Proposition 3.1. $\square$
3.2. The general cost case evaluated for a subjective density. In the same setup (1.4), let us further explore the situation for a cost function $c$ verifying (1.5). Most of the items of Proposition 3.1 hold with the set $B_1$. The first point is independent of the nature of $c$. The proof of the third point still holds for a general cost, provided the second point holds. The proof of existence and uniqueness of $\xi \in L^\infty((0,T))$ is still valid. In fact, the main issue lies in proving that $\xi$ is Lipschitz for any $\rho$ in a given set $B$. In order to explore this issue, let us start from estimate (3.5) of Lemma 3.3:
\[
2\,|\xi(t) - \xi(s)| \le \left| \int_{-1}^{\bar\xi} \bigl(c(\rho(t,x)) - c(\rho(s,x))\bigr)\,dx - \int_{\bar\xi}^{1} \bigl(c(\rho(t,x)) - c(\rho(s,x))\bigr)\,dx \right|.
\]
Recall that $c$ satisfies (1.5). We set $\overline{\alpha} := \mathrm{ess\,sup}_{u\in[0,1]}\, c'(u)$ and $\underline{\alpha} := \mathrm{ess\,inf}_{u\in[0,1]}\, c'(u) > 0$. Using the negative and positive parts of $(\rho(t,\cdot) - \rho(s,\cdot))$ and rearranging the terms, we get the following estimate:
\[
(3.6) \qquad 2\,|\xi(t) - \xi(s)| \le \left(\frac{\overline{\alpha} + \underline{\alpha}}{2}\right) \left| \int_{-1}^{\bar\xi} \bigl(\rho(t,x) - \rho(s,x)\bigr)\,dx - \int_{\bar\xi}^{1} \bigl(\rho(t,x) - \rho(s,x)\bigr)\,dx \right| + \left(\frac{\overline{\alpha} - \underline{\alpha}}{2}\right) \int_{-1}^{1} |\rho(t,x) - \rho(s,x)|\,dx =: I_1 + I_2.
\]
The first term $I_1$ on the right-hand side is controlled by the estimate of Lemma 3.2. The issue lies in controlling the second term $I_2$. This suggests that, in order to prove that $\xi \in W^{1,\infty}((0,T))$, we need an estimate of the modulus of continuity of $\rho$ as an element of $C^0([0,T], L^1(\mathbb{R}))$. While the standard Oleinik regularizing effect can be used locally away from the turning curve (see [5]), in a vicinity of the turning curve the spatial variation of $\rho$ may not be controlled; moreover, the (ir)regularity of the turning curve itself impacts the modulus of continuity of $\rho$, making it an open question how to control the time variations of $\rho$. We leave this issue for future research. However, we can treat a natural modification of problem (1.4) for which the method applied for the affine cost (3.3) extends to general costs.
Let $R : L^1((-\infty,T)) \longrightarrow L^1((0,T))$ be the operator defined by:
\[
(3.7) \qquad R[\rho(\cdot,x)](t) := \delta \int_{-\infty}^{t} \rho(s,x)\,e^{-\delta(t-s)}\,ds.
\]
To make this operator well defined, we extend $\rho$ by $\rho(t) = \rho_0$ for any $t \in (-\infty, 0]$. This model corresponds to a memory effect in the individuals' perception of the density; $R[\rho]$ is a subjective density perceived by an agent making the decision to move towards the most appropriate exit.

AN EXISTENCE RESULT FOR HUGHES' MODEL
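For intuition, the operator $R$ of (3.7) can be simulated with an exponential update that is exact for piecewise-constant input; the density signal below is an arbitrary illustration, not data from the paper. Since $\frac{d}{dt}R[\rho](t) = \delta\,(\rho(t) - R[\rho](t))$, the subjective density inherits the bounds $0 \le R[\rho] \le 1$ and is Lipschitz in time even when $\rho$ itself is discontinuous in time.

```python
import numpy as np

def subjective_density(rho_vals, t, delta, rho_past):
    """Discretize R[rho](t) = delta * int_{-inf}^t rho(s) e^{-delta (t-s)} ds,
    with rho extended by rho_past on (-inf, 0]. The recursion is exact when
    rho is constant on each grid cell."""
    R = np.empty_like(rho_vals)
    R[0] = rho_past                        # steady state of the constant past
    for k in range(1, len(t)):
        w = np.exp(-delta * (t[k] - t[k - 1]))
        R[k] = w * R[k - 1] + (1.0 - w) * rho_vals[k - 1]
    return R

delta, T = 3.0, 2.0
t = np.linspace(0.0, T, 4001)
rho_vals = 0.5 + 0.5 * np.sign(np.sin(8.0 * t))   # rough density, values in [0, 1]
R = subjective_density(rho_vals, t, delta, rho_past=0.5)

# R stays within the bounds of rho, and its discrete slope stays below
# 2 * delta * ||rho||_inf, although rho itself jumps
assert np.all((R >= -1e-9) & (R <= 1.0 + 1e-9))
lip = np.max(np.abs(np.diff(R)) / np.diff(t))
assert lip <= 2.0 * delta + 1e-6
```

This time-regularization of $R[\rho]$ is exactly what restores, for a general cost, the Lipschitz control on $\xi$ that fails in (3.6).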
Thus, we consider the problem:
\[
(1.4a) \qquad \rho_t + \bigl[\mathrm{sign}(x - \xi(t))\,\rho\,v(\rho)\bigr]_x = 0,
\]
\[
(1.4b') \qquad \int_{-1}^{\xi(t)} c\bigl(R[\rho(\cdot,x)](t)\bigr)\,dx = \int_{\xi(t)}^{1} c\bigl(R[\rho(\cdot,x)](t)\bigr)\,dx,
\]
with $c$ verifying (1.5) and with initial datum satisfying (1.3). Equation (1.4b') takes into account the average density over the recent past instead of the instantaneous density at time $t$. This models the bias, due to some inertia of human thinking, in the pedestrians' perception of the density in the corridor; the quantity $R[\rho(\cdot,x)]$ can be compared to other "subjective densities" used in the literature (cf. [10], [8, 7]).
With the same calculations as in (3.6), we recover the term
\[
I_2 = \int_{-1}^{1} \bigl| R[\rho(\cdot,x)](t) - R[\rho(\cdot,x)](s) \bigr|\,dx,
\]
which is controlled by $2\delta\|\rho\|_{L^\infty}|t-s|$, a bound for the modulus of continuity of $R[\rho(\cdot,x)]$. For $I_1$ we can pass the absolute value inside the integral; then $I_1$ is also controlled by the modulus of continuity of $R[\rho(\cdot,x)]$. Notice that we do not need property (1.16) for this reasoning.
Consequently, we define:
\[
(3.9) \qquad B_2 = \bigl\{\rho \in B_{L^1}(0, T\|\rho_0\|_{L^1}) \ \text{s.t.}\ 0 \le \rho \le 1\bigr\}.
\]
Then $I_\delta : (B_2, \|\cdot\|_{L^1((0,T)\times\mathbb{R})}) \longrightarrow (W^{1,\infty}((0,T)), \|\cdot\|_\infty)$, $\rho \mapsto \xi$, where $\xi$ is defined by (1.4b') with $R$ given by (3.7), is well defined.
The analogue of Proposition 3.1 (where we use $I_\delta$ instead of $I_0$, use $B_2$ instead of $B_1$, and drop the assumption of an affine cost) is easily justified. In particular, the proof of the third item of this analogue of Proposition 3.1 holds with these choices. Thus, without the restriction (3.3) on the cost, we have the following claim:

Proposition 3.4. Let $\rho_0$ satisfy (1.3). Let $c$ verify (1.5). Then problem (1.6a)-(1.6b)-(1.4b') admits at least one solution.
3.3. The general cost case with relaxed equilibrium. We consider (1.6) with a modified equilibrium equation (1.4b). This time, we suppose that the collective behavior of pedestrians introduces some amount of inertia into the dynamics of $\xi$. Fixing $\epsilon > 0$, we consider, as the simplest variant of such dynamics, the ODE Cauchy problem
\[
(3.10a) \qquad -\epsilon\,\dot\xi(t) = \int_{\xi(t)}^{1} c(\rho(t,x))\,dx - \int_{-1}^{\xi(t)} c(\rho(t,x))\,dx,
\]
\[
(3.10b) \qquad \int_{\xi(0)}^{1} c(\rho_0(x))\,dx - \int_{-1}^{\xi(0)} c(\rho_0(x))\,dx = 0,
\]
for the $\rho$-driven evolution of the turning curve $\xi$. Formally, the case $\epsilon = 0^+$ corresponds to the standard Hughes relation between the density and the turning curve; $\epsilon > 0$ models a form of relaxation to the equilibrium given by this standard model. The primitive form of the Hughes model, where the position of the turning curve is determined by an instantaneous Hamilton-Jacobi equation, should be modified to fit this dynamics of the turning curve; this modeling issue will be discussed elsewhere.
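A minimal numerical sketch of (3.10), with hypothetical choices of the cost $c$ and the density $\rho$ (neither is specified at this point of the paper): the initial turning point is obtained from (3.10b) by bisection, since $a \mapsto \int_a^1 c - \int_{-1}^a c$ is strictly decreasing when $c > 0$, and (3.10a) is then integrated by explicit Euler. The computed curve respects the uniform Lipschitz bound $2\|c\|_\infty/\epsilon$ of Proposition 3.5 below.

```python
import numpy as np

# Hypothetical data for illustration only: a bounded increasing cost, as in
# (1.5), and a time-independent density profile crowded near x = 1.
def c(rho):
    return 1.0 + rho

def rho(t, x):
    return 0.5 * (1.0 + np.tanh(3.0 * x))

x = np.linspace(-1.0, 1.0, 2001)

def F(t, a):
    """int_a^1 c(rho(t,x)) dx - int_{-1}^a c(rho(t,x)) dx, by trapezoids."""
    vals = np.where(x >= a, 1.0, -1.0) * c(rho(t, x))
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(x)))

# (3.10b): bisection for the unique zero of the strictly decreasing a -> F(0, a)
lo, hi = -1.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(0.0, mid) > 0.0 else (lo, mid)
xi0 = 0.5 * (lo + hi)

# (3.10a): explicit Euler for -eps * xi' = F(t, xi), clamped to the corridor
eps, T, n = 1.0, 0.5, 500
dt = T / n
xi = [xi0]
for k in range(n):
    xi.append(min(1.0, max(-1.0, xi[-1] - dt * F(k * dt, xi[-1]) / eps)))
xi = np.array(xi)

# |xi'| <= 2 ||c||_inf / eps (here = 4), independently of rho
assert np.max(np.abs(np.diff(xi))) / dt <= 4.0 / eps + 1e-9
```

Since more mass sits near $x = 1$, the balance point $\xi(0)$ of (3.10b) lies to the right of the center, and the Euler increments stay within the a priori slope bound regardless of the chosen density.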
Proposition 3.5. Let $\rho \in L^1((0,T)\times\mathbb{R})$. Let $c$ verify the conditions (1.5). There exists a unique solution $\xi$ to the Cauchy problem (3.10). Furthermore, $\xi$ is Lipschitz and the Lipschitz constant is independent of $\rho$.
Proof. Let us denote:
\[
\Psi(t,a) := \frac{1}{\epsilon} \left( \int_a^1 c(\rho(t,x))\,dx - \int_{-1}^a c(\rho(t,x))\,dx \right).
\]
Notice that for any $a, b \in [-1,1]$ and $t \in \mathbb{R}$,
\[
(3.11) \qquad |\Psi(t,a) - \Psi(t,b)| \le \frac{1}{\epsilon} \left| \int_a^b 2\,c(\rho(t,x))\,dx \right| \le \frac{2\|c\|_\infty}{\epsilon}\,|a - b|.
\]
We also have, for any $\xi$ such that $\|\xi\|_\infty \le 1$:
\[
|\Psi(t,\xi(t))| \le \frac{1}{\epsilon} \left| \int_{-1}^1 \mathrm{sign}(x - \xi(t))\,c(\rho(t,x))\,dx \right| \le \frac{2\|c\|_\infty}{\epsilon}.
\]
So $\Psi$ is Lipschitz with respect to the variable $a$ and uniformly bounded with respect to the variable $t$. We apply the Cauchy-Lipschitz theorem and recover that there exists a unique local solution to the Cauchy problem (3.10). Using (3.11), we recover that the solution is global on $[0,T]$ and that $\xi$ is Lipschitz; moreover, the Lipschitz constant of $\xi$ does not depend on $\rho$. $\square$
Remark 3.6. From Proposition 3.5, it follows that the map $\widetilde{I}_\epsilon : L^1((0,T)\times\mathbb{R}; [0,1]) \longrightarrow W^{1,\infty}((0,T))$ that maps any $\rho$ to the unique solution $\xi$ of (3.10) is well defined.
Proposition 3.7. Let $\rho_1, \rho_2 \in L^1((0,T)\times\mathbb{R})$. Denote $\xi_{1,2} := \widetilde{I}_\epsilon(\rho_{1,2})$. Then,
\[
(3.12) \qquad \|\xi_1 - \xi_2\|_\infty \le \frac{\|c'\|_\infty}{\epsilon} \exp\left(\frac{2T\|c\|_\infty}{\epsilon}\right) \|\rho_1 - \rho_2\|_{L^1((0,T)\times(-1,1))}.
\]
Proof. We denote by $\xi_0$ the unique solution to (3.10b). Then, for any $t \in [0,T]$:
\[
\xi_{1,2} = \xi_0 - \int_0^t \Psi_{1,2}(s, \xi_{1,2}(s))\,ds.
\]
Then, writing $\vee$, $\wedge$ for min, max, respectively, we make the following calculations:
\[
\xi_2(t) - \xi_1(t) = \int_0^t \bigl[\Psi_1(s,\xi_1(s)) - \Psi_2(s,\xi_2(s))\bigr]\,ds
= \frac{1}{\epsilon} \int_0^t \left( \int_{\xi_1(s)}^{1} c(\rho_1(s,x))\,dx - \int_{-1}^{\xi_1(s)} c(\rho_1(s,x))\,dx - \int_{\xi_2(s)}^{1} c(\rho_2(s,x))\,dx + \int_{-1}^{\xi_2(s)} c(\rho_2(s,x))\,dx \right) ds
\]
\[
= \frac{1}{\epsilon} \int_0^t \left( \int_{-1}^{(\xi_1\vee\xi_2)(s)} \bigl[c(\rho_2(s,x)) - c(\rho_1(s,x))\bigr]\,dx \pm \int_{(\xi_1\vee\xi_2)(s)}^{(\xi_1\wedge\xi_2)(s)} \bigl[c(\rho_1(s,x)) + c(\rho_2(s,x))\bigr]\,dx + \int_{(\xi_1\wedge\xi_2)(s)}^{1} \bigl[c(\rho_1(s,x)) - c(\rho_2(s,x))\bigr]\,dx \right) ds.
\]
And consequently,
\[
|\xi_1(t) - \xi_2(t)| \le \frac{1}{\epsilon} \int_0^t \int_{(\xi_1\vee\xi_2)(s)}^{(\xi_1\wedge\xi_2)(s)} \bigl[c(\rho_1(s,x)) + c(\rho_2(s,x))\bigr]\,dx\,ds + \frac{1}{\epsilon} \int_0^t \int_{-1}^{1} |c(\rho_1(s,x)) - c(\rho_2(s,x))|\,dx\,ds =: J_1 + J_2.
\]
+page_content=' For the term J2 we can use the Lagrange inequality denoting ∥c′∥∞ := supp∈[0,1] |c′(p)|.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' We get: J2 ≤ ∥c′∥∞ ǫ ∥ρ1 − ρ2∥L1((0,T )×(−1,1)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' For the the term J1, notice that, thanks to the cost conditions (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='5), for any s ∈ [0, t], 2|ξ1(s) − ξ2(s)| ≤ � (ξ1∧ξ2)(s) (ξ1∨ξ2)(s) c(ρ1(s, x)) + c(ρ2(s, x)) dx ≤ 2∥c∥∞|ξ1(s) − ξ2(s)| AN EXISTENCE RESULT FOR HUGHES’ MODEL 15 Consequently for any s ∈ [0, T ], there exists β(s) ∈ [2 , 2 ∥c∥∞] such that � (ξ1∧ξ2)(s) (ξ1∨ξ2)(s) c(ρ1(s, x)) + c(ρ2(s, x)) dx = β(s)|ξ1(s) − ξ2(s)|.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Then β ∈ L∞((0, T )) ⊂ L1((0, T )).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' We are now in a position to use Gronwall’s inequality with integrable coefficients.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' That inequality still holds without the continuity of β if we use the Lebesgue differentiation Theorem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' We thus reach to |ξ1(t) − ξ2(t)| ≤ � t 0 β(s) ǫ |ξ1(s) − ξ2(s)| ds + ∥c′∥∞ ǫ ∥ρ1 − ρ2∥L1 which yields the subsequent estimates |ξ1(t) − ξ2(t)| ≤ ∥c′∥∞ ǫ ∥ρ1 − ρ2∥L1 exp �� t 0 β(s) ǫ ds � , ∥ξ1 − ξ2∥∞ ≤ ∥c′∥∞ ǫ exp �2T ∥c∥∞ ǫ � ∥ρ1 − ρ2∥L1 Remark 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
Remark 3.8. One can check that, in the relaxed equilibrium setting, we never used any property of ρ apart from the universal bounds 0 ≤ ρ ≤ 1. Consequently, in this case we also use:

(3.9) B2 = {ρ ∈ B_{L^1}(0, T∥ρ0∥_{L^1}) s.t. 0 ≤ ρ ≤ 1}.

Here is the final result in this relaxed equilibrium setting:

Proposition 3.9. Let ρ0 satisfy (1.3). Let c verify (1.5). Then problem (1.6a)-(1.6b)-(3.10) admits at least one solution.

Proof. We only have to apply Corollary 1.9 with B2 as the set B and check that, using Propositions 3.5 and 3.7, all the assumptions on Ĩε are satisfied.
4. Hughes’ model with constrained evacuation at exit.

In this section, we illustrate the robustness of our approach by modifying the Hughes model at the level of the boundary conditions for the density, allowing for the realistic feature of capacity drop (see [8, 7] and references therein). We consider the following dynamics for ρ, introduced in [8] on the basis of the theory of [11, 3]:

(4.1a) ρt + [sign(x − ξ(t)) f(ρ)]x = 0,
(4.1b) f(ρ(t, 1)) ≤ g(∫_σ^1 w1(x) ρ(t, x) dx),
(4.1c) f(ρ(t, −1)) ≤ g(∫_{−1}^{−σ} w−1(x) ρ(t, x) dx),
(4.1d) ρ(0, ·) = ρ0(·).

The equations (4.1b)-(4.1c) prescribe the behaviour at the exits situated at x = ±1; as in the previous sections, we set up the conservation law for ρ in the whole space, but the initial condition (1.3) is confined to the domain of interest (−1, 1). The flux f(ρ) of pedestrians going through the exits is limited by the respective constraints (we take a common nonlinearity g for the sake of conciseness, but it is straightforward to extend the setting distinguishing g1 and g−1). The flux limiter g depends nonlocally on ρ(t, ·) and on a weight w supported in a vicinity of length 1 − σ around the exits. This type of constraint models the well-known phenomenon of capacity drop which, in extreme situations, corresponds to panic behaviour at the exits located at x = ±1, as discussed in [8] and [7]. This model, which allows for constrained evacuation at the exits, is phenomenologically more relevant than the model with open-end conditions considered above (and it includes the previous model, for the trivial choice g ≡ max_{[0,1]} f, see Remark 4.3). As an example, this constrained evacuation model is able to reproduce the “Faster is Slower” effect at exits (see [7]). In the following, we use the results of [7] and adapt them to our framework.
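To fix ideas, the nonlocal constraint in (4.1b) can be evaluated explicitly. In the sketch below, the flux f, the limiter g and the weight w1 are toy choices (LWR flux, affine capacity drop, uniform weight), not the ones used in the paper:

```python
import numpy as np

# Evaluate Q1(t) = g( int_sigma^1 w1(x) rho(t,x) dx ), the exit constraint (4.1b).
sigma = 0.5
f = lambda r: r * (1.0 - r)                # toy concave flux with f(0) = f(1) = 0
fbar = f(0.5)                              # f(rho_bar), the maximal flux value
g = lambda m: fbar * (1.0 - 0.8 * m)       # non-increasing limiter into (0, f(rho_bar)]
x = np.linspace(sigma, 1.0, 1001)
w1 = np.full_like(x, 1.0 / (1.0 - sigma))  # uniform weight with int_sigma^1 w1 = 1

rho = 0.9 * np.ones_like(x)                # heavily congested exit zone
m = np.sum(w1 * rho) * (x[1] - x[0])       # nonlocal average of rho near the exit
Q1 = g(m)                                  # admissible exit flux at this time
assert Q1 < fbar                           # congestion lowers the exit capacity
```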
We use the notation proposed in that paper.

16 B. ANDREIANOV, T. GIRARD

Since f is concave and positive with f(0) = f(1) = 0, there exists ¯ρ ∈ [0, 1] such that f′(ρ)(¯ρ − ρ) > 0 for a.e. ρ ∈ [0, 1].
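For instance, with the prototypical LWR flux (an illustrative choice, not one fixed by the paper), ¯ρ is simply the maximiser of f:

```latex
f(\rho) = \rho(1-\rho), \qquad f'(\rho) = 1-2\rho, \qquad \bar\rho = \tfrac12,
\qquad f'(\rho)\,(\bar\rho-\rho) = (1-2\rho)\Bigl(\tfrac12-\rho\Bigr)
= \tfrac{(1-2\rho)^2}{2} > 0 \quad \text{for a.e. } \rho\in[0,1].
```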
We fix σ ∈ (0, 1). This is the threshold of influence of the exits, meaning that pedestrians located before x = σ have no influence on the exit congestion at x = 1. Let us take the strongest assumptions used in [8, 7]:

(4.2) w1 ∈ W^{1,∞}((σ, 1], R+) s.t. ∫_σ^1 w1 = 1, and w−1 ∈ W^{1,∞}([−1, −σ), R+) s.t. ∫_{−1}^{−σ} w−1 = 1;

(4.3) g ∈ W^{1,∞}(R+, (0, f(¯ρ)]) is non-increasing.

We can now introduce the notion of solution we use for ρ, combining the one in [11] and Definition 1.1:
+page_content='1: Definition 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Let ξ ∈ W 1,∞((0, T ), (−1, 1)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Let ρ0 ∈ L1(R, [0, 1]) supported in [−1, 1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Let f be a con- cave positive flux such that f(0) = 0 = f(1) and F(t, x, ρ) := sign(x − ξ(t))f(ρ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Let g, ω−1 and ω1 satisfy (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='2)-(4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' We say that ρ ∈ L1((0, T ) × R) is an admissible solution to (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='1) if: for all φ ∈ C∞ c ((0, T ) × R), (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='4) �� (0,T )×R ρφt + F(t, x, ρ)φx dt dx = 0, moreover, setting Q−1(t) := g �� −σ −1 w−1(x)ρ(t, x) dx � , Q1(t) := g �� 1 σ w1(x)ρ(t, x) dx � , (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='5) there holds: For all positive φ ∈ C∞ − c({x > ξ(t)}), for all k ∈ R, − �� (0,T )×R |ρ − k| φt + q(ρ, k)φx dt dx − 2 � T 0 � 1 − Q1(t) f(¯ρ) � f(k)φ(t, 1) dx − � R |ρ0 − k|φ(0, x) dx ≤ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='6) For all positive φ ∈ C∞ c ({x < ξ(t)}), for all k ∈ R, − �� (0,T )×R |ρ − k| φt + q(ρ, k)φx dt dx − 2 � T 0 � 1 − Q−1(t) f(¯ρ) � (−f(k)) φ(t, −1) dx − � R |ρ0 − k|φ(0, x) dx ≤ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='7) For all positive φ ∈ C∞ supported on [a, b] such that a < −1, 1 < b we have: � T 0 � −1 a ρφt + F(t, x, ρ)φx dt dx ≤ � T 0 Q−1(t)φ(t, −1) dt (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='8a) � T 0 � b 1 ρφt + F(t, x, ρ)φx dt dx ≤ � T 0 Q1(t)φ(t, 1) dt (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content='8b) Remark 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
Remark 4.2. As detailed in [3], the equations (4.8) combined with the weak solution property (4.4) imply that, for a.e. t ≥ 0, f(γ^1_{L,R} ρ(t)) ≤ Q1(t) and −f(γ^{−1}_{L,R} ρ(t)) ≥ −Q−1(t). This corresponds to the expected limited flux condition.
Remark 4.3. One can notice that if, for all t ≥ 0, g ≡ f(¯ρ), then the flux is not limited at the exits and 1 − Q1(t)/f(¯ρ) = 1 − Q−1(t)/f(¯ρ) = 0. Then this definition is exactly Definition 1.1.
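Remark 4.3 amounts to a one-line computation, checked here for arbitrary values of the nonlocal average (the value f(¯ρ) = 1/4 corresponds to the toy flux f(ρ) = ρ(1 − ρ), an illustrative choice):

```python
fbar = 0.25                       # f(rho_bar) for the toy flux rho * (1 - rho)
g = lambda m: fbar                # trivial limiter g == f(rho_bar) of Remark 4.3
for m in (0.0, 0.3, 1.0):         # arbitrary values of the nonlocal average
    Q = g(m)                      # then Q_{+-1}(t) = f(rho_bar) for every t
    assert 1.0 - Q / fbar == 0.0  # the penalty terms in (4.6)-(4.7) vanish
```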
We have the following results:

Proposition 4.4. Let ρ0 verify (1.3). Let ξ ∈ W^{1,∞}((0, T), (−1, 1)). There exists a solution to (4.1) in the sense of Definition 4.1.

The proof of Proposition 4.4 is postponed to the Appendix. It is obtained via a convergent finite volume scheme; the details of the scheme and the proof of convergence can be found there.
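Although the scheme and its convergence proof are given in the Appendix, the key mechanism (capping the numerical flux at the exit interface by the constraint, in the spirit of the constrained conservation laws of [11, 3]) can be sketched in a few lines. Everything below, the LWR flux, the constant limiter Q1, the grid and the data, is an illustrative assumption and not the paper’s actual scheme:

```python
import numpy as np

f = lambda r: r * (1.0 - r)   # toy LWR flux (illustrative), f(0) = f(1) = 0

def godunov(rl, rr):
    """Exact Godunov flux for the concave bump f, whose maximiser is 1/2."""
    if rl <= rr:
        return min(f(rl), f(rr))
    return f(min(max(0.5, rr), rl))

def step(rho, dx, dt, Q1):
    """One explicit step on (xi, 1) with the exit flux capped by Q1, cf. (4.8b)."""
    F = np.empty(rho.size + 1)
    F[0] = 0.0                              # no inflow through x = xi
    for j in range(1, rho.size):
        F[j] = godunov(rho[j - 1], rho[j])  # interior Godunov fluxes
    F[-1] = min(godunov(rho[-1], 0.0), Q1)  # constrained exit interface at x = 1
    return rho - dt / dx * (F[1:] - F[:-1]), F[-1]

n, Q1 = 100, 0.1
dx, cfl = 1.0 / n, 0.4                      # CFL: dt <= dx since |f'| <= 1
rho = 0.8 * np.ones(n)                      # congested initial state
rho_new, exit_flux = step(rho, dx, cfl * dx, Q1)
assert exit_flux <= Q1                      # the outflow respects the limiter
```

Per step, the cell averages stay in [0, 1] and the mass balance loses exactly dt times the (capped) exit flux, which is the discrete counterpart of the constrained evacuation.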
Using the results from [11], [7], [8] and a partitioning argument, we prove a corollary of Theorem 1.8:

Corollary 4.5. Let ρ0 verify (1.3). Let ξ ∈ W^{1,∞}((0, T), (−1, 1)). There exists at most one solution ρ of (4.1) in the sense of Definition 4.1. Using Proposition 4.4, the solver operator

Sg : (W^{1,∞}((0, T), (−1, 1)), ∥·∥∞) −→ (L^1((0, T) × (−1, 1)), ∥·∥_{L^1}),

that maps any ξ to the unique solution ρ of (4.1), is well defined and continuous.
Proof of Corollary 4.5. We use the classical embedding of W^{1,∞}([0, T], (−1, 1)) into C^0([0, T], (−1, 1)): there exists a closed segment K of (−1, 1) such that ξ ∈ C^0([0, T], K). We consider a partition of unity (φi)_{i∈{−1,0,1}} of an open set containing [−1, 1] such that all the supports are segments, with 1 ∈ supp(φ1), −1 ∈ supp(φ−1) and K ⊂ supp(φ0) ⊂ (−1, 1), and [supp(φ−1) ∪ supp(φ1)] ∩ K = ∅.

Let ρ, ρ̂ be two solutions in the sense of Definition 4.1. We denote by Q̂1, Q̂−1 the constraints associated with ρ̂. Let Ψ ∈ C^∞_c((0, T) × R). We use the classical Kruzhkov doubling of variables (cf. [14]) in the open subdomains of (0, T) × R situated between x = −∞ and x = −1, x = −1 and x = ξ(t), x = ξ(t) and x = 1, and finally between x = 1 and x = +∞. Then, by a limiting procedure analogous to the one employed in the proof of Theorem 2.1, we obtain the Kato inequality carrying singular terms concentrated on the three curves {x = ξ(t)}, {x = 1} and {x = −1}:

− ∫∫_{(0,T)×(−1,1)} |ρ − ρ̂| φt + q(ρ, ρ̂) φx

(4.9a) ≤ ∫_0^T Ψ(t, ξ(t)) (φ0 + φ−1 + φ1)(t, ξ(t)) [q^0_R(γ_R ρ, γ_R ρ̂) − q^0_L(γ_L ρ, γ_L ρ̂)]

(4.9b) + ∫_0^T Ψ(t, 1) φ1(t, 1) [q^1(γ_R ρ, γ_R ρ̂) − q^1(γ_L ρ, γ_L ρ̂)]

(4.9c) + ∫_0^T Ψ(t, −1) φ−1(t, −1) [q^{−1}(γ_R ρ, γ_R ρ̂) − q^{−1}(γ_L ρ, γ_L ρ̂)],

where the left and right traces are taken along their respective curves, and

q^0_{L,R}(ρ, ρ̂) := sign(ρ − ρ̂) [f_{L,R}(ρ) − f_{L,R}(ρ̂) − ξ̇(t) (ρ − ρ̂)],
q^1(ρ, ρ̂) := sign(ρ − ρ̂) [f_R(ρ) − f_R(ρ̂)],
q^{−1}(ρ, ρ̂) := sign(ρ − ρ̂) [f_L(ρ) − f_L(ρ̂)].
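These entropy fluxes inherit the Lipschitz bound |q(ρ, ρ̂)| ≤ Lf |ρ − ρ̂|, which underlies the finite-speed-of-propagation (trapezoid) argument used next. A quick randomized check on the toy flux f(ρ) = ρ(1 − ρ), an illustrative choice with Lf = 1 on [0, 1]:

```python
import numpy as np

f = lambda r: r * (1.0 - r)            # toy flux; |f'| <= 1 on [0, 1], so Lf = 1
Lf = 1.0
rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, 1000)        # samples of rho
rh = rng.uniform(0.0, 1.0, 1000)       # samples of rho_hat
q = np.sign(r - rh) * (f(r) - f(rh))   # Kruzhkov entropy flux q(rho, rho_hat)
assert np.all(np.abs(q) <= Lf * np.abs(r - rh) + 1e-12)
```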
Referring to the proof of Theorem 2.1, the integral (4.9a) is zero. Using the same argument as in the proof of Proposition 2.10 in [3], we get:

(4.9b) ≤ 2 ∫_0^T Ψ(t, 1) |Q1(t) − Q̂1(t)| dt,

(4.9c) ≤ 2 ∫_0^T Ψ(t, −1) |Q−1(t) − Q̂−1(t)| dt.

As in the proof of Theorem 2.1, we integrate (4.9) along a trapezoid T^{0,t}_{a,b}. Then we use the definition of Q±1, Q̂±1, with Lg the Lipschitz constant of g, to get the following inequality:

∥ρ(t, ·) − ρ̂(t, ·)∥_{L^1((a,b))} ≤ ∥ρ0 − ρ̂0∥_{L^1((a−Lf t, b+Lf t))} + 2 ∫_0^t ∫_{−1}^1 Lg (1_{(−1,−σ)} w−1 + 1_{(σ,1)} w1) |ρ − ρ̂| dx ds.

Eventually, using Hölder’s inequality and Gronwall’s lemma, we get:

(4.10) ∥ρ(t, ·) − ρ̂(t, ·)∥_{L^1((a,b))} ≤ ∥ρ0 − ρ̂0∥_{L^1((a−Lf t, b+Lf t))} e^{Ct}, where C := 2 Lg ∥1_{(−1,−σ)} w−1 + 1_{(σ,1)} w1∥∞.

Consequently, there is at most one solution in the sense of Definition 4.1 associated to a fixed turning curve ξ and an initial datum ρ0.
In order to recover the continuity of the operator Sg, we proceed in the same way as in the proof of Proposition 1.8. We first cover any compact set contained in {ξ(t) < x < 1} by trapezoids. Without loss of generality, we can suppose that those trapezoids are at distance at least ε from both interfaces {x = ξ(t)} and {x = 1}. Consequently, on any such trapezoid, for all n ≥ n0, ρn is a Kruzhkov entropy solution. We recover compactness thanks to the averaging compactness lemma. This reasoning can be reproduced in the three other parts of the domain: {x < −1}, {−1 < x < ξ(t)} and {x > 1}. Then we can pass to the limit via dominated convergence in equation (4.4) and in all the inequalities (4.6)-(4.7)-(4.8). We conclude with the same classical arguments as in the proof of Proposition 1.8. This ends the proof of Corollary 4.5.

We are ready to state the main result of this section, which is an analogue of Theorem 1.9.
Theorem 4.6. Let ρ0 verify (1.3). Assume that f verifies (1.13). Let g (resp. w1, w−1) satisfy (4.3) (resp. (4.2)). Let B be a convex closed bounded subset of L^1((0, T) × R) and let

I : (B, ∥·∥_{L^1((0,T)×R)}) −→ (C^0([0, T], R), ∥·∥∞)

be a continuous operator such that for all ρ ∈ B and all t ∈ [0, T], I[ρ](t) ∈ (−1, 1). If there exists r > 0 such that (1.14a)-(1.14b) hold, then there exists a solution (ρ, ξ) to the problem (4.1)-(1.6b)-(1.6c), where ρ is a solution in the sense of Definition 4.1. In particular, existence is verified for I = I0 (for an affine cost) or for I = Iδ or Ĩε (for a general cost verifying (1.5)).
Appendix A. Convergence of the finite volume scheme in the constrained case.

In order to prove existence of a solution to (4.1) in the sense of Definition 4.1, we construct a converging finite volume scheme adapted around the fixed turning curve ξ. At the exits we use an operator splitting method with a scheme for the constraints Q₁ and Q₋₁ as in [7]. We now present the scheme used in this setting.
Let T, J ∈ ℕ be such that
\[
\text{(CFL)}\qquad 2\bigl(\|f'\|_\infty + \|\dot\xi\|_\infty\bigr)\,\frac{J}{T} \le 1.
\]
We construct the following scheme:
\[
\Delta t = \frac{1}{T}, \qquad t^n := n\,\Delta t, \tag{A.1a}
\]
\[
\Delta x = \frac{1}{J}, \qquad x_j := j\,\Delta x, \tag{A.1b}
\]
\[
s^n := \frac{1}{\Delta t}\int_{t^n}^{t^{n+1}} \dot\xi(s)\,\mathrm{d}s, \qquad s_\Delta(t) := \sum_{n=1}^{N} \mathbf{1}_{[t^n,t^{n+1})}(t)\,s^n, \tag{A.1c}
\]
\[
\xi_\Delta(t) := \xi(0) + \int_0^t s_\Delta(s)\,\mathrm{d}s, \qquad \xi^n := \xi_\Delta(t^n). \tag{A.1d}
\]
The discretization (A.1c)-(A.1d) of the ξ interface is detailed in [22, Section 3.1], where it is required to construct the adapted mesh. For any n, we denote by j_n the unique element of ⟦−J, J⟧ such that ξ^n ∈ [x_{j_n}, x_{j_n+1}). We construct the following mesh:
\[
\chi^n_j := \begin{cases} x_j & \text{if } j \le j_n - 1, \\ y^n & \text{if } j = j_n, \\ x_j & \text{if } j \ge j_n + 1, \end{cases}
\]
\[
P^n_{j+1/2} := \begin{cases} (\chi^n_j, \chi^n_{j+1}) \times (t^n, t^{n+1}) & \text{if } j \le j_n - 2, \\ \text{the trapezoid } \chi^n_{j_n-1}\,\chi^{n+1}_{j_n-1}\,\chi^{n+1}_{j_n+1}\,\chi^n_{j_n} & \text{if } j = j_n - 1, \\ \text{the trapezoid } \chi^n_{j_n}\,\chi^{n+1}_{j_n+1}\,\chi^{n+1}_{j_n+2}\,\chi^n_{j_n+2} & \text{if } j = j_n, \\ (\chi^n_{j+1}, \chi^n_{j+2}) \times (t^n, t^{n+1}) & \text{if } j \ge j_n + 1. \end{cases} \tag{A.1e}
\]

AN EXISTENCE RESULT FOR HUGHES' MODEL

Notice that, thanks to the (CFL) condition, x_{j_n−1} < ξ^{n+1} < x_{j_n+2}, so the trapezoids defined above are never reduced to a triangle. We denote by \underline{P}^n_{j+1/2} (resp. \overline{P}^n_{j+1/2}) the bottom (resp. top) segment of the trapezoid P^n_{j+1/2}. However, now that the mesh is modified, we have two different partitions of the line t = t^{n+1}: (\underline{P}^{n+1}_{j+1/2})_{j∈ℤ} and (\overline{P}^n_{j+1/2})_{j∈ℤ}. We define (ρ̄^{n+1}_{i+1/2})_{i∈ℤ} as the values of ρ^{n+1} on (\overline{P}^n_{i+1/2})_{i∈ℤ} and (ρ^{n+1}_{j+1/2})_{j∈ℤ} as the projection of these values onto (\underline{P}^{n+1}_{j+1/2})_{j∈ℤ}.
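The grid (A.1a)-(A.1b) and the interface discretization (A.1c)-(A.1d) can be sketched numerically as follows. This is an illustrative Python sketch, not the authors' code: the function names are invented for the example, and the choice of the smallest integer T satisfying the (CFL) condition is one convenient option among many.

```python
import numpy as np

def build_time_space_grid(f_prime_max, xi_dot_max, J):
    """Grid (A.1a)-(A.1b) on [0,1] x [-1,1]: pick the smallest integer T with
    2*(||f'||_inf + ||xi_dot||_inf) * J / T <= 1, i.e. the (CFL) condition."""
    T = int(np.ceil(2.0 * (f_prime_max + xi_dot_max) * J))
    dt, dx = 1.0 / T, 1.0 / J
    t = dt * np.arange(T + 1)        # t^n = n * dt
    x = dx * np.arange(-J, J + 1)    # x_j = j * dx, j in [-J, J]
    return dt, dx, t, x

def discretize_interface(xi, dt, n_steps):
    """Interface discretization (A.1c)-(A.1d): averaged speeds s^n and the
    reconstructed positions xi^n = xi_Delta(t^n).  The integral of xi' over
    [t^n, t^{n+1}] is evaluated exactly as the increment xi(t^{n+1}) - xi(t^n)."""
    tn = dt * np.arange(n_steps + 1)
    s = (xi(tn[1:]) - xi(tn[:-1])) / dt                           # s^n
    xi_n = xi(0.0) + dt * np.concatenate(([0.0], np.cumsum(s)))   # xi_Delta(t^n)
    return s, xi_n
```

Because s^n is the exact average of ξ̇ on each time slab, the reconstruction satisfies ξ_Δ(t^n) = ξ(t^n), so the discrete interface interpolates the exact one at grid times.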
\[
\bar\rho^{\,n+1}_{j+1/2} = \frac{\rho^n_{j+1/2}\,\bigl|\underline{P}^n_{j+1/2}\bigr| - \Delta t\,\bigl(f^n_{j+1} - f^n_j\bigr)}{\bigl|\overline{P}^n_{j+1/2}\bigr|} \tag{A.1f}
\]
\[
\rho^{n+1}_{j+1/2} := \frac{1}{\bigl|\underline{P}^{n+1}_{j+1/2}\bigr|}\sum_{i\in\mathbb{Z}} \bigl|\underline{P}^{n+1}_{j+1/2} \cap \overline{P}^n_{i+1/2}\bigr|\;\bar\rho^{\,n+1}_{i+1/2} \tag{A.1g}
\]
\[
\rho_\Delta(t,x) := \sum_{n=0}^{N} \sum_{\substack{j\in\mathbb{Z} \\ j \ne j_n \pm 1}} \rho^n_{j+1/2}\,\mathbf{1}_{P^n_{j+1/2}}(t,x) \tag{A.1h}
\]
We now want to define the numerical fluxes (f^n_j)_{j∈ℤ} corresponding to the left and right edges of the trapezoids.
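The update (A.1f) and the conservative projection (A.1g) between the two partitions of the line t = t^{n+1} can be sketched as below. This is a minimal sketch under simplifying assumptions: cell geometry is reduced to interval lengths, and the helper names are hypothetical, not taken from the paper.

```python
import numpy as np

def finite_volume_update(rho, bottom_len, top_len, flux_edges, dt):
    """Trapezoidal-cell update in the spirit of (A.1f): each new cell average is
    (old value * bottom segment length - dt * flux difference) / top segment length."""
    return (np.asarray(rho) * np.asarray(bottom_len)
            - dt * np.diff(flux_edges)) / np.asarray(top_len)

def project_piecewise_constant(a_nodes, vals, b_nodes):
    """Conservative re-averaging in the spirit of (A.1g): transfer a piecewise-
    constant function from the cells of partition a_nodes onto the cells of
    partition b_nodes, weighting each old value by the overlap length."""
    out = np.zeros(len(b_nodes) - 1)
    for j in range(len(b_nodes) - 1):
        L, R = b_nodes[j], b_nodes[j + 1]
        acc = 0.0
        for i in range(len(a_nodes) - 1):
            acc += max(0.0, min(R, a_nodes[i + 1]) - max(L, a_nodes[i])) * vals[i]
        out[j] = acc / (R - L)
    return out
```

Weighting by overlap lengths makes the projection mass-preserving whenever the new partition covers the old one, which is the property the scheme's conservativity argument relies on.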
It is worth noticing that we skipped f^n_{j_n+1} when we constructed the mesh. We first define the non-local constraint approximation:
\[
\rho^n_{\Delta x}(\cdot) = \sum_{j\in\mathbb{Z}} \rho^n_{j+1/2}\,\mathbf{1}_{[\chi^n_j,\chi^n_{j+1})}(\cdot) \tag{A.1i}
\]
\[
q^n_1 := g_1\!\left(\int_\sigma^1 \rho^n_{\Delta x}(x)\,\omega_1(x)\,\mathrm{d}x\right) \tag{A.1j}
\]
\[
q^n_{-1} := g_{-1}\!\left(\int_{-1}^{-\sigma} \rho^n_{\Delta x}(x)\,\omega_{-1}(x)\,\mathrm{d}x\right) \tag{A.1k}
\]
\[
F(\rho^n_{j-1/2}, \rho^n_{j+1/2}) = \begin{cases} \min\bigl(\mathrm{God}_f(\rho^n_{j-1/2}, \rho^n_{j+1/2}),\, q^n_1\bigr) & \text{if } j - 1 = J, \\ \max\bigl(\mathrm{God}_{-f}(\rho^n_{j-1/2}, \rho^n_{j+1/2}),\, -q^n_{-1}\bigr) & \text{if } j = -J, \\ F^n_{\mathrm{int}}(\rho^n_{j-1/2}, \rho^n_{j+1/2}) & \text{if } j = j_n, \\ \mathrm{God}_f(\rho^n_{j-1/2}, \rho^n_{j+1/2}) & \text{if } j > j_n \text{ and } j - 1 \ne J, \\ \mathrm{God}_{-f}(\rho^n_{j-1/2}, \rho^n_{j+1/2}) & \text{if } j < j_n \text{ and } j \ne -J. \end{cases}
\]
(A.1l)

Eventually, we define F^n_{int} as in [6] (see details in Subsections 2.5, 3.3 and 5.1):
\[
f^n_{L,R}(\rho) := \pm f(\rho) - s^n\rho,
\]
\[
\forall (\rho_L, \rho_R) \in [0,1]^2,\ \exists\, k \in [0,1] \text{ s.t. } \mathrm{God}_{f^n_L}(\rho_L, k) = \mathrm{God}_{f^n_R}(k, \rho_R),
\]
\[
F^n_{\mathrm{int}}(\rho^n_{j-1/2}, \rho^n_{j+1/2}) := \mathrm{God}_{f^n_L}(\rho^n_{j-1/2}, k) = \mathrm{God}_{f^n_R}(k, \rho^n_{j+1/2}). \tag{A.1m}
\]
Numerical simulations with this scheme can be found in [6, Sect. 5.1] for the case of open-end conditions at the exits. We are now in a position to start the proof of convergence, which merely assembles, with the help of the partition-of-unity technique of [22, 6], the arguments from [6] (for the inner interface situated at x = ξ(t)) and [7] (for the constraints set at x = ±1).
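The constrained exit fluxes of (A.1l) and the constraint levels (A.1j)-(A.1k) can be illustrated with a short sketch. Assumptions not fixed by this appendix: the flux is taken as the classical unimodal f(ρ) = ρ(1 − ρ) so that the Godunov flux admits its standard closed form for fluxes increasing up to a critical point and decreasing after; the names `godunov`, `constraint_level` and `constrained_exit_flux` are hypothetical.

```python
import numpy as np

def lwr_flux(rho):
    """Illustrative unimodal flux f(rho) = rho*(1 - rho), maximal at rho = 1/2."""
    return rho * (1.0 - rho)

def godunov(a, b, f=lwr_flux, crit=0.5):
    """Godunov flux God_f(a, b): classical closed form for a flux increasing on
    [0, crit] and decreasing on [crit, 1]."""
    return min(f(min(a, crit)), f(max(b, crit)))

def constraint_level(rho_cells, nodes, omega, g, lo, hi, sub=16):
    """Approximate q = g( int_lo^hi rho_dx(x) omega(x) dx ) as in (A.1j)-(A.1k):
    rho_dx is piecewise constant on the cells [nodes[j], nodes[j+1]), so the
    integral is a sum of cell values times weighted overlaps, each overlap
    integral of omega computed by composite midpoint quadrature."""
    total = 0.0
    for j, r in enumerate(rho_cells):
        L, R = max(lo, nodes[j]), min(hi, nodes[j + 1])
        if R <= L:
            continue  # cell does not meet the weighting interval
        xs = L + (R - L) * (np.arange(sub) + 0.5) / sub  # midpoints of sub-intervals
        total += r * (R - L) / sub * float(np.sum(omega(xs)))
    return g(total)

def constrained_exit_flux(a, b, q):
    """First line of (A.1l): Godunov flux capped from above by the constraint q."""
    return min(godunov(a, b), q)
```

Capping the Godunov flux by q reproduces the mechanism of the non-local point constraint: when the weighted upstream density is high, q drops and the exit flux is throttled regardless of the local Riemann data.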
Proof of Proposition 4.4. The proof follows the general idea of [22, Sect. 4]; see also [6]. Since the interfaces {x = −1}, {x = ξ(t)} and {x = 1} are non-intersecting, we isolate them in the supports of a partition of unity φ₋₁, φ₀ and φ₁. We fix a test function φ.

B. ANDREIANOV, T. GIRARD

Taking (the discretization of) the test function φ₀φ, we can use the specific result for the Hughes model treated in [6, Sect. 5.1] to recover the approximate entropy inequalities satisfied by the discrete solution, with the test function φ₀φ. For the test functions φ₋₁φ and φ₁φ, we use in the same way the result of [7, Prop. 3.1]. Summing up the contributions of the three parts of the partition of unity, we obtain the approximate entropy inequality for the discrete solution with an arbitrary test function φ. In addition, the integral weak formulation for the approximate solution follows from the scheme's conservativity. We use the same compactness argument as in [22, Sect. 3.4]. We can pass to the limit in the approximate weak formulation and in the approximate entropy inequalities, for the chosen converging subsequence and an arbitrary test function. This allows us to characterize the limit as an entropy solution, in the sense of Definition 4.1, of the problem at hand. Finally, thanks to the uniqueness proven in Theorem 4.5, the whole sequence of discrete solutions converges to the unique solution in the sense of Definition 4.1.
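The localization step of the proof can be pictured with three piecewise-linear weights summing to one. This is a minimal numerical sketch, assuming (as the proof does) that the inner interface ξ(t) stays at a positive distance from x = ±1, here inside the central band where φ₀ ≡ 1; the function name and the value of the margin δ are illustrative.

```python
import numpy as np

def partition_of_unity(x, delta=0.25):
    """Three nonnegative weights phi_{-1}, phi_0, phi_1 summing to 1 on [-1, 1]:
    phi_{-1} (resp. phi_1) equals 1 near x = -1 (resp. x = 1) and vanishes at
    distance >= 2*delta from that endpoint; phi_0 = 1 - phi_{-1} - phi_1 covers
    the band around the inner interface."""
    x = np.asarray(x, dtype=float)
    phi_m1 = np.clip((-1.0 + 2.0 * delta - x) / delta, 0.0, 1.0)  # left exit
    phi_p1 = np.clip((x - 1.0 + 2.0 * delta) / delta, 0.0, 1.0)   # right exit
    phi_0 = 1.0 - phi_m1 - phi_p1                                  # inner band
    return phi_m1, phi_0, phi_p1
```

Because the two end bumps have disjoint supports for δ ≤ 1/2, the middle weight is nonnegative, and multiplying any test function φ by these weights splits it into three pieces, each supported near exactly one interface.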
Acknowledgments. This paper has been supported by the RUDN University Strategic Academic Leadership Program.
REFERENCES

[1] D. Amadori and M. Di Francesco, The one-dimensional Hughes model for pedestrian flow: Riemann-type solutions, Acta Math. Sci. Ser. B Engl. Ed., 32 (2012), pp. 259-280.
[2] D. Amadori, P. Goatin, and M. D. Rosini, Existence results for Hughes' model for pedestrian flows, J. Math. Anal. Appl., 420 (2014), pp. 387-406.
[3] B. Andreianov, P. Goatin, and N. Seguin, Finite volume schemes for locally constrained conservation laws, Numer. Math. (Heidelb.), 115 (2010), pp. 609-645.
[4] B. Andreianov, K. H. Karlsen, and N. H. Risebro, A theory of L1-dissipative solvers for scalar conservation laws with discontinuous flux, Arch. Ration. Mech. Anal., 201 (2011), pp. 27-86.
[5] B. Andreianov, M. D. Rosini, and G. Stivaletta, On existence, stability and many-particle approximation of solutions of 1D Hughes model with linear costs. Working paper or preprint, July 2021.
[6] B. Andreianov and A. Sylla, Finite volume approximation and well-posedness of conservation laws with moving interfaces under abstract coupling conditions. Submitted, 2022.
[7] B. P. Andreianov, C. Donadello, U. Razafison, and M. D. Rosini, Qualitative behaviour and numerical approximation of solutions to conservation laws with non-local point constraints on the flux and modeling of crowd dynamics at the bottlenecks, Mathematical Modelling and Numerical Analysis, 50 (2015), pp. 1269-1287.
[8] B. P. Andreianov, C. Donadello, and M. D. Rosini, Crowd dynamics and conservation laws with nonlocal constraints and capacity drop, Mathematical Models and Methods in Applied Sciences, 24 (2014), pp. 2685-2722.
[9] C. Cancès and T. Gallouët, On the time continuity of entropy solutions, J. Evol. Equ., 11 (2011), pp. 43-55.
[10] J. A. Carrillo, S. Martin, and M.-T. Wolfram, An improved version of the Hughes model for pedestrian flow, Mathematical Models and Methods in Applied Sciences, 26 (2016), pp. 671-697.
[11] R. M. Colombo and P. Goatin, A well posed conservation law with a variable unilateral constraint, J. Differ. Equ., 234 (2007), pp. 654-675.
[12] M. Di Francesco, P. A. Markowich, J.-F. Pietschmann, and M.-T. Wolfram, On the Hughes' model for pedestrian flow: The one-dimensional case, J. Differ. Equ., 250 (2011), pp. 1334-1362.
[13] N. El-Khatib, P. Goatin, and M. D. Rosini, On entropy weak solutions of Hughes model for pedestrian motion, Zeitschrift für angewandte Mathematik und Physik, 64 (2013), pp. 223-251.
[14] L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, May 1998.
[15] P. Goatin and M. Mimault, The wave-front tracking algorithm for Hughes' model of pedestrian motion, SIAM J. Sci. Comput., 35 (2013), pp. B606-B622.
+page_content=' [16] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Gomes and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Velho, On the hughes model and numerical aspects, (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' [17] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Hughes, A continuum theory for the flow of pedestrians, Transportation Research Part B-methodological, 36 (2002), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' 507–535.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' [18] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Lighthill and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Whitham, On kinematic waves.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' ii.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' a theory of traffic flow on long crowded roads, Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 229 (1955), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' 317–345.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' [19] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Perthame, Kinetic formulation of conservation laws, Oxford Lecture Series in Mathematics and its Applications, Clarendon Press, Oxford, England, Jan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' [20] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Richards, Shock waves on the highway, Operations research, 4 (1956), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' 42–51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' [21] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Sylla, Influence of a slow moving vehicle on traffic: Well-posedness and approximation for a mildly nonlocal model, Networks and Heterogeneous Media, 16 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' [22] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Sylla, A lwr model with constraints at moving interfaces, ESAIM: Mathematical Modelling and Numerical Analysis, 56 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' [23] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Twarogowska, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Goatin, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Duvigneau, Numerical study of macroscopic pedestrian flow models, (2013).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' [24] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Vasseur, Strong traces for solutions of multidimensional scalar conservation laws, Arch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Ration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Mech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=', 160 (2001), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' 181–193.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' [25] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' Zeidler, Applied functional analysis, Applied mathematical sciences, Springer, New York, NY, 1995 ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=', Dec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
+page_content=' 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'}
diff --git a/v9AyT4oBgHgl3EQfaffQ/content/2301.00245v1.pdf b/v9AyT4oBgHgl3EQfaffQ/content/2301.00245v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e2f7e3c3e2e56d115e34eafb829adf04a7e89305
--- /dev/null
+++ b/v9AyT4oBgHgl3EQfaffQ/content/2301.00245v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f71e4d6acb6464306a2ac78359c596fe2132abbb63e0e2743dbca3bdaa162a4c
+size 2417344
diff --git a/v9AyT4oBgHgl3EQfaffQ/vector_store/index.faiss b/v9AyT4oBgHgl3EQfaffQ/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..156423a5cafb9a2f0fb95d17ec239281680d8f6a
--- /dev/null
+++ b/v9AyT4oBgHgl3EQfaffQ/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:933b05a8caa011dbe97b300205f6b4a19a64315dbdced6281c4a3e8210ddca16
+size 5505069
diff --git a/v9E2T4oBgHgl3EQf2wjd/content/2301.04165v1.pdf b/v9E2T4oBgHgl3EQf2wjd/content/2301.04165v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..814d7fcbdc66dff1f1e43447301144b05f58f252
--- /dev/null
+++ b/v9E2T4oBgHgl3EQf2wjd/content/2301.04165v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63698d852ee48295214fc6bd2b60666c26d240e01e3039af3e231fef4263ee74
+size 1040639
diff --git a/v9E2T4oBgHgl3EQf2wjd/vector_store/index.faiss b/v9E2T4oBgHgl3EQf2wjd/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..0728b6fe25df1ff90edbcc4ddbdfdd349f13957d
--- /dev/null
+++ b/v9E2T4oBgHgl3EQf2wjd/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c5929f06328c44194cc28da6bd9ec2a2599292157ea9eeffd9282a5da9b1e8b3
+size 2555949
diff --git a/v9E2T4oBgHgl3EQf2wjd/vector_store/index.pkl b/v9E2T4oBgHgl3EQf2wjd/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..53756ffe107a404c46abf0473f8a8db2c9ab080a
--- /dev/null
+++ b/v9E2T4oBgHgl3EQf2wjd/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:662c3f3aa2ef0a97a7ea0bb207de255d1132ec308ec539baeef0947b6743da1f
+size 85874
diff --git a/vdAzT4oBgHgl3EQfP_uD/content/tmp_files/2301.01193v1.pdf.txt b/vdAzT4oBgHgl3EQfP_uD/content/tmp_files/2301.01193v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..1185caeaa90dd76e8094563ade46e09b627ac3b8
--- /dev/null
+++ b/vdAzT4oBgHgl3EQfP_uD/content/tmp_files/2301.01193v1.pdf.txt
@@ -0,0 +1,919 @@
+Springer Nature 2021 LATEX template
+Measuring the diversity of data and metadata in digital
+libraries
+Rafael C. Carrasco1*, Gustavo Candela1 and Manuel Marco-Such1
+1*Departamento de Lenguajes y Sistemas Informáticos, Universidad de Alicante, Carretera
+San Vicente del Raspeig s/n, San Vicent del Raspeig, 03690, Alicante, Spain.
+*Corresponding author(s). E-mail(s): carrasco@ua.es;
+Contributing authors: gcandela@ua.es; marco@dlsi.ua.es;
+Abstract
+Diversity indices have been traditionally used to capture the biodiversity of ecosystems by
+measuring the effective number of species or groups of species. In contrast to abundance,
+which is correlated with the amount of data available, diversity indices provide a more
+robust indicator of the variability of individuals. These types of indices can be employed
+in the context of digital libraries to identify trends in the distribution of topics, compare
+the lexica employed by different authors or analyze the coverage of semantic metadata.
+Keywords: Metadata, Digital Libraries, Open Data, Collections as Data
+1 Introduction
+Richness, usually defined as the number of species
+present in an ecosystem, provides a limited picture
+of its biodiversity as it weights all groups equally,
+regardless their relative abundances. In contrast,
+diversity indices [5] are numerical estimators that
+measure both richness and evenness by giving
+more relevance to abundant species. They there-
+fore provide an effective number of species which
+is more robust than the sample size, due to the
+smaller contribution of rare, possibly undetected,
+cases.
+As digital libraries become more readily avail-
+able, there is an increasing need to explore which
+bibliometric measures could make their features
+easier to understand. It has been argued [8] that
+diversity indices could effectively disentangle the
+correlation between richness and data volume.
+The purpose of this paper is therefore to analyze
+how diversity indices could assist researchers and
+professionals in evaluating the lexical diversity of
+the content as well as the metadata coverage in
+digital collections.
+As regards textual content, the type-token ratio
+(TTR) has been traditionally employed to mea-
+sure the lexical diversity of documents. The TTR
+is computed as the number of different words
+(types) divided by the number of words (tokens)
+in the text. For example, previous works com-
+pare different approaches, including MTLD [9] and
+vocd [10], to evaluate TTR and its variability
+within a sample. Some researchers [7] have also
+explored whether genres could be characterized by
+specific TTR probability distributions.
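As a concrete illustration, the TTR is simple to compute; the sketch below uses a naive regex tokenizer, which is our simplification and not the tokenization used in the studies cited above.

```python
import re

def type_token_ratio(text: str) -> float:
    """Number of distinct word forms (types) divided by total words (tokens)."""
    tokens = re.findall(r"\w+", text.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# "the" repeats, so there are 5 types among 6 tokens
ratio = type_token_ratio("the cat sat on the mat")
print(ratio)
```

Because the denominator grows with document length, raw TTR values are only comparable between texts of similar size, which is precisely the limitation the diversity indices discussed below address.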
+Previous research has suggested applying
+diversity indices to evaluate the lexical richness
+of documents [6]. But other features of digital
+libraries could also benefit from analysis using
+diversity concepts. For example, the local and
+1
+arXiv:2301.01193v1 [cs.DL] 3 Jan 2023
+
+temporal variations in the coverage of topics or
+authors could be better examined by computing
+diversity indices, as they are not as sensitive to
+infrequent items which are not representative of
+the collection.
+Let us recall that, in ecology, the true diversity,
+or diversity index of order k for an ecosystem with
+N groups or species, is defined as
+D[k] = ( ∑_{n=1}^{N} p_n^k )^{1/(1−k)}    (1)
+where pn is the probability or relative abundance
+of the n-th class, and the parameter k determines
+the relative weight of frequent versus infrequent
+groups: the larger k is, the less significant rare
+species are.
+There is therefore a family of indices D[k], the
+Shannon index (k = 1) and the Simpson index
+(k = 2) among the most popular [12]. Although
+the parameter k influences the value of the diver-
+sity obtained, the exact choice is not critical when
+the objective is to compare diversities at differ-
+ent locations or time intervals. In particular, when
+addressing digital library data and metadata, k =
+1 becomes a natural choice, as D[1] can be easily
+connected to the entropy of a source [13], defined
+in information theory as
+H = − ∑_{n=1}^{N} p_n log p_n
+It is thus not difficult to prove that, as k
+approaches 1, one obtains D[1] = exp(H). We also
+note that k = 0 leads to the richness R of the
+sample.
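The family of indices in Equation (1), including the k → 1 limit, can be computed directly from class abundances. This is our own illustrative sketch, not code taken from the paper's published scripts.

```python
from math import exp, log

def hill_diversity(counts, k):
    """True diversity D[k] of order k from raw class abundances."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    if k == 1:
        # limit as k approaches 1: the exponential of the Shannon entropy
        return exp(-sum(p * log(p) for p in probs))
    return sum(p ** k for p in probs) ** (1 / (1 - k))

abundances = [3, 2, 1]                 # three species with 3, 2 and 1 individuals
print(hill_diversity(abundances, 0))   # richness R = 3.0
print(hill_diversity(abundances, 1))   # Shannon diversity, exp(H)
print(hill_diversity(abundances, 2))   # inverse Simpson concentration
```

Note how D[0] returns the plain richness, while higher orders discount the rare third species.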
+In this paper we will explore the applicability
+of diversity indices to analyzing data (Section 2)
+and metadata (Section 3) produced by digital
+libraries. Our comparison between libraries will be
+based on linked open data collections [1] published
+by libraries, as they provide an open benchmark.
+2 Lexical diversity
+The number M of entries in a document's vocabulary,
+also known as the number of token types, provides
+an indication of its lexical diversity.
+The number of token types depends, how-
+ever, on the document length, and M shows a
+monotonic growth with the number n ≤ N of
+tokens processed, N being the document length
+(see Figure 1). This unbounded growth is con-
+sistent with the well known fact that tokens in
+a collection approximately follow a Zipfian dis-
+tribution [11]. However, this growth prevents a direct
+comparison of texts based on the size of the
+vocabulary used.
+The number of token types in the plots can
+be accurately approximated by a power function
+Cn^α with only two parameters: the scale C and
+the exponent α. The parameters that best fit the
+examples can be found in Table 1, and they have
+been used to draw the lines in Figure 1, which
+closely follow the data points.
+                    C     α
+Los pazos de Ulloa  6.7   0.68
+Doña Perfecta       6.9   0.66
+La Galatea          11.1  0.59
+Table 1 Optimal parameters for the lines Cn^α depicted
+in Figure 1.
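Parameters such as those in Table 1 can be estimated by ordinary least squares in log-log space, since log M = log C + α log n. The sketch below is self-contained; the data points are synthetic, not the novels' actual counts.

```python
from math import log, exp

def fit_power_law(ns, ms):
    """Least-squares fit of M = C * n**alpha in log-log space; returns (C, alpha)."""
    xs = [log(n) for n in ns]
    ys = [log(m) for m in ms]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    c = exp(my - alpha * mx)
    return c, alpha

# Synthetic check: points generated from M = 7 * n**0.66
ns = [1000, 5000, 20000, 80000]
ms = [7 * n ** 0.66 for n in ns]
c, alpha = fit_power_law(ns, ms)
print(round(c, 2), round(alpha, 2))
```

On noise-free power-law data the log-log regression recovers the parameters exactly, which makes it a convenient first approximation before fitting the saturating models discussed next.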
+A potential advantage of diversity indices is
+that they consist of a single finite value with
+an intuitive interpretation. The diversity of types
+can be calculated exactly if the underlying prob-
+ability distribution of the vocabulary is known
+(and stationary), but, in practice, the probabili-
+ties must be estimated from a text sample using
+the observed frequencies instead. As the accuracy
+of the estimation increases with the text length,
+the result will converge to the true value as the
+number of tokens grows. In the most common situ-
+ation, however, the sample size is not large enough
+to approximate the asymptotic value: as shown
+in Figure 2, the Shannon diversity index is usu-
+ally still growing when the end of the document is
+reached.
+The diversity plots in Figure 2 call for a sat-
+urating function to model the observed shape. A
+function which has been traditionally used to esti-
+mate biodiversity from samples of variable size [4]
+is the saturating exponential
+∆M1(n) = D (1 − e^{−αn}),    (2)
+which involves only two parameters, the exponent
+α and the asymptotic value D of the diversity
+index.
+
+Fig. 1 Vocabulary size as a function of the number of tokens read for three novels: Los pazos de Ulloa by Emilia Pardo
+Bazán, Doña Perfecta by Benito Pérez Galdós and La Galatea by Miguel de Cervantes Saavedra.
+A second traditional asymptotic model [4] for
+species accumulation curves is the two-parameter
+function
+∆M2(n) = D n/(n + c).    (3)
+In our experiments, when models M1 and
+M2 were extrapolated, they usually underesti-
+mated the diversity of larger samples. We there-
+fore investigated additional saturating functions,
+in particular, a generalized quotient of monomials
+∆M3(n) = D (n + b)/(n + c),    (4)
+and the powered quotient
+∆M4(n) = D ( n/(n + c) )^α.    (5)
+We note that in all models, D is the asymptotic
+value, that is, the true diversity index.
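The paper's own fitting code is not reproduced here; one rough, assumption-laden way to fit model M4 is a grid search over c and α, exploiting the fact that for fixed shape parameters the optimal asymptote D has a closed-form least-squares solution. All names and the test curve below are ours.

```python
def m4(n, D, c, alpha):
    """Model M4: saturating curve D * (n / (n + c)) ** alpha."""
    return D * (n / (n + c)) ** alpha

def fit_m4(ns, ds, cs, alphas):
    """Crude grid search over (c, alpha); for each pair the best D is linear."""
    best = None
    for c in cs:
        for a in alphas:
            f = [(n / (n + c)) ** a for n in ns]
            D = sum(fi * di for fi, di in zip(f, ds)) / sum(fi * fi for fi in f)
            err = sum((D * fi - di) ** 2 for fi, di in zip(f, ds))
            if best is None or err < best[0]:
                best = (err, D, c, a)
    return best[1], best[2], best[3]

# Synthetic sample drawn from a known curve with asymptote 1200
ns = list(range(1000, 11000, 1000))
ds = [m4(n, 1200.0, 3000.0, 0.8) for n in ns]
D, c, alpha = fit_m4(ns, ds,
                     cs=range(1000, 6000, 500),
                     alphas=[i / 10 for i in range(5, 15)])
print(round(D))
```

In practice a proper nonlinear least-squares routine would replace the grid search, but the sketch makes explicit that the extrapolated diversity is simply the fitted parameter D.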
+When ten thousand tokens were used to
+extrapolate the curve for larger values, the results
+showed that model M4 consistently outperformed
+the others (see Figure 3). It can be argued that,
+given the high accuracy of the predictions, the
+extrapolated diversity computed by model M4
+(the value of parameter D) can be used to compare
+the lexical diversity of texts or that of collections
+labeled by author, genre or historical period.
+Our results show that the value predicted with
+model M4 does not depend on the size of the sam-
+ple text. As an illustration, Figure 4 shows the
+lexical diversity of works by a prolific author (Lope
+de Vega) as a function of the text length. The vari-
+ability we found could be associated with the style
+of the work (for example, works with rhyming tend
+to exhibit higher diversity), but the diversity has
+no significant correlation with the length of the
+work (Pearson’s R ≃ −0.08).
+3 Metadata diversity
+3.1 Catalographic records
+Diversity indices can be also employed to ana-
+lyze the catalographic metadata created by dig-
+ital libraries. For example, Figure 5 shows the
+richness and diversity of book authors in the
+catalogs of three libraries which have published
+comprehensive collections of catalographic data
+
+Fig. 2 Shannon diversity index for the works presented in Figure 1.
+using open licenses: a large library (Library of
+Congress, LoC1), a medium-sized library (Uni-
+versiteitsbibliotheek Gent, UGent2), and a small
+library (Biblioteca Virtual Miguel de Cervantes,
+BVC3).
+The richness and diversity lines show a mono-
+tonic growth over time with no indication that a
+plateau could be reached soon. The smaller ratio
+between diversity and richness for the BVC library
+(about 33%) in comparison to the ratio for the
+LoC and UGent collections (52–54%) is a reflec-
+tion of its narrower scope—the BVC focuses on
+Hispanic literature and history—which shows a
+reduced fraction of the authors providing a vast
+contribution to the catalog. Indeed, the average
+number of items per author in the BVC collection
+is µ = 4.9, while this average is lower for the LoC
+(µ = 2.5) and UGent library (µ = 2.1).
+We also investigated whether the coverage of
+topics in a digital library remains stable, serv-
+ing a specialized audience, or whether it tends to
+1Library of Congress full book records: www.loc.gov/item/
+2020445551
+2University of Gent book records: lib.ugent.be/info/exports
+3Miguel de Cervantes book records: data.cervantesvirtual.
+com/datasets
+cover a wider spectrum. Figure 6 shows the trends
+when the complete descriptor of the subject head-
+ing field is analyzed and when its content is split
+into topical, chronological, geographical, or other
+subdivisions (so that, for example, the descriptor
+Commerce–History becomes two subjects, Com-
+merce and History).
+In the samples analyzed, the variety of sub-
+jects typically shows a constant growth with time,
+both in terms of richness and diversity. However,
+this is not the case for the BVC library when
+the subjects are decomposed into subdivisions.
+This is due, on the one hand, to a more inten-
+sive usage of chronological subdivisions. On the
+other hand, an inspection of the records reveals
+that the library has, after an initial period, pro-
+gressively increased the fraction of content within
+the fields of history and literature (and, remark-
+ably, theater) in Spanish—which now account for
+nearly one third of its content. The BVC has thus
+recently developed into a more specialized library.
+3.2 Linked open data
+Over the last decade, cultural heritage institu-
+tions have moved towards adopting the semantic
+
+Fig. 3 Predictive power of the models when the initial 10000 tokens are used to identify the optimal parameters.
+
+Fig. 4 Shannon diversity index of books by Lope de Vega.
+web [2] and linked open data concepts by using the
+W3C Resource Description Framework to express
+semantic relationships [16] and the SPARQL [15]
+language to query them. RDF describes resources
+(the content of a library) by categorizing them in
+classes (such as person, work or name) and uses
+properties (such as author) to express relation-
+ships between resources. Both resources and prop-
+erties are identified by URIs (Uniform Resource
+Identifiers): for example, a triple (X, P, Y ) can link
+the identifier of a person X to the identifier of a
+name Y connected by the property P, where the
+meaning of URI P is has name. Analogously, a
+triple of the form (X, rdf:type, Z) declares X to
+belong to class Z.
+Libraries have progressively adapted their cat-
+alogs [14] to facilitate the publication of Linked
+Open Data (LOD) repositories. However, as shown
+in Table 2, they have used a variety of vocabularies
+for the definition of RDF classes and properties.
+The repositories have also been made
+available in various forms, which include pub-
+lic SPARQL endpoints, OAI-PMH interfaces and
+even open-access dump files.4
+In order to test the application of diver-
+sity indices to LOD, the data shown in Table 3
+were retrieved from those repositories which dis-
+tribute them with open licenses and via a public
+SPARQL endpoint. We note that these end-
+points may not always reflect the current situation
+of the libraries.5 The harvesting was performed
+with simple scripts,6 such as those presented in
+Appendix A.
+The diversity D and richness R of the resources
+were computed, as well as the diversity to rich-
+ness ratio, which provides an indication of how
+effective the usage of the available tags is. As
+shown in Table 3, some libraries, such as the Aus-
+trian National Library (AT), the National Library
+of Finland (FI) and the Koninklijke Bibliotheek
+(KB) employ vocabularies with a small number of
+[Footnotes: 4. http://www.openarchives.org/pmh
+5. For example, as of March 2022, the Europeana SPARQL endpoint has not been updated since July 2017.
+6. Some repositories implement a timeout limit for the downloads. In such cases, partitioned queries were needed to retrieve all the information.]
+
+Fig. 5 Cumulative number of authors and Shannon diversity of the authors in the catalog as a function of the year the
+MARC record entered the catalog.
+Table 2 Linked Open Data repositories published by libraries.
+Institution                         Vocabularies         URL
+Austrian National Library           edm bibframe rda     labs.onb.ac.at/en/dataset/lod
+Biblioteca Nacional de España       frbr                 datos.bne.es
+Biblioteca Virtual M. de Cervantes  rda                  data.cervantesvirtual.com
+Bibliothèque nat. de France         frbr                 data.bnf.fr
+Bibliothèque nat. du Luxembourg     xml                  data.bnl.lu
+British National Bibliography       bibo                 bnb.data.bl.uk
+Europeana                           edm                  pro.europeana.eu/page/sparql
+Deutsche Nationalbibliothek         bibframe             www.dnb.de/EN/lds
+Library of Congress                 bibframe             id.loc.gov
+National Library of Finland         Schema.org bibframe  data.nationallibrary.fi
+Koninklijke Bibliotheek             Schema.org lrm       data.bibliotheken.nl
+classes and properties. In contrast, the National
+Library of France (BNF) and the National Library
+of Spain (BNE) describe their resources in terms of
+the richer FRBR and RDA vocabularies. The BNF
+also employs a proprietary vocabulary to describe
+the roles of creators which contains over 500 cat-
+egories. Since they are not uniformly used, this
+leads to a lower D/R ratio. The British National
+Bibliography (BNB) is an intermediate case, as it
+essentially employs the BIBO vocabulary which
+contains 33 classes and 88 properties.
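The quantities reported in Table 3 can be derived from tag counts such as those returned by the queries in Appendix A. Below is a small sketch with made-up counts; the class names are hypothetical and do not come from any of the repositories listed above.

```python
from math import exp, log

def diversity_richness(counts):
    """Shannon diversity D = exp(H), richness R and the D/R ratio
    from a mapping of tag (class or property) to occurrence count."""
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    H = -sum(p * log(p) for p in probs)
    D, R = exp(H), len(probs)
    return D, R, D / R

# Hypothetical class counts, as a query like Listing 1 might return
class_counts = {"Work": 5000, "Person": 3000, "Concept": 10}
D, R, ratio = diversity_richness(class_counts)
print(R)  # 3 classes are declared, but the effective number D is close to 2
```

The rarely used third class barely contributes to D, which is exactly why the D/R ratio indicates how evenly the available descriptors are actually used.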
+Although there is a moderate positive corre-
+lation between the diversity of classes and the
+
+Fig. 6 Cumulative richness and Shannon diversity index of the subjects in the catalog. Left: complete subject headings.
+Right: subject heading subdivisions. Note the specific scales used for richness.
+diversity of properties employed in each collec-
+tion (see Figure 7), some libraries show a finer
+granularity of classes while others employ a higher
+variety of properties.
+4 Conclusions
+Diversity indices provide a complementary view of
+the variety of the groups in a collection of data. In
+contrast to richness, diversity is more robust with
+respect to the sample size, as it gives less weight to classes
+with a smaller number of occurrences.
+When lexical content is analyzed, the diver-
+sity of words approaches an asymptotic value
+which depends on the author and genre of the
+works. This value can be obtained by extrapo-
+lating the observed values with a simple model
+
+Table 3 Diversity D, richness R and diversity-to-richness ratio D/R of the resources contained in linked open data.
+           class              property
+           D     R    D/R     D     R    D/R
+AT         2.1   5    0.42    10.7  22   0.48
+BNB        13.2  33   0.40    26.6  88   0.30
+BNE        3.8   16   0.24    50.9  189  0.27
+BNF        6.9   26   0.27    55.5  791  0.07
+BVC        6.6   27   0.24    32.0  165  0.19
+EU         5.1   11   0.46    37.1  115  0.32
+FI         7.0   12   0.59    17.3  35   0.49
+KB         3.9   12   0.32    14.6  23   0.64
+Fig. 7 Shannon diversity of classes and properties in linked open data published by libraries.
+involving only three free parameters. The extrap-
+olation proves stable with respect to the size of
+the sample.
+As regards metadata, diversity indices can
+be used to visualize the trends, for example, in
+creator or subject coverage. The ratio between
+diversity and richness also proves useful to com-
+pare the effective usage of the available descriptors
+(classes and properties) to describe resources in
+the semantic data (linked open data collections)
+published by digital libraries.
+The Python scripts employed for the analy-
+sis included in this paper have been published as
+open-access software in [3].
+Acknowledgments.
+We thank Frank Vande-
+pitte and Patrick Hochstenbach from the Ghent
+University Library for their kind assistance in
+understanding the library catalographic records.
+Appendix A  SPARQL queries
+
+Listing 1 Query used to retrieve all classes and the
+number of resources per class in a LOD repository.
+SELECT ?class (COUNT(?s) AS ?count)
+WHERE {
+  ?s a ?class
+}
+GROUP BY ?class
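For readers who want to check the semantics of Listing 1 offline: the aggregation amounts to counting, per class, the subjects of `rdf:type` triples. A pure-Python rendering (the triple list and identifiers below are hypothetical, not taken from any real repository):

```python
from collections import Counter

RDF_TYPE = "rdf:type"  # the predicate abbreviated as "a" in SPARQL

def count_resources_per_class(triples):
    """Equivalent of Listing 1: count subjects per class over rdf:type triples."""
    return Counter(o for s, p, o in triples if p == RDF_TYPE)

triples = [
    ("X", RDF_TYPE, "Person"),
    ("Y", RDF_TYPE, "Work"),
    ("Z", RDF_TYPE, "Person"),
    ("X", "hasName", "N"),   # non-typing triples are ignored
]
counts = count_resources_per_class(triples)  # 'Person' -> 2, 'Work' -> 1
```

The resulting histogram is exactly the input needed to compute the class diversity and richness reported in Table 3.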
+Listing 2 Query retrieving external repositories linked
+from a specific LOD repository and the number of links to
+each one.
+SELECT ?hostname (COUNT(?s) AS ?count)
+WHERE {
+  ?s owl:sameAs ?same .
+  BIND(
+    STRBEFORE(STRAFTER(STR(?same), "//"), "/")
+    AS ?hostname )
+}
+GROUP BY ?hostname
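The string surgery in Listing 2 keeps whatever lies between "//" and the next "/" of each linked URI. A rough Python equivalent, for illustration only:

```python
def hostname(uri):
    """Approximate Python counterpart of
    STRBEFORE(STRAFTER(STR(?same), "//"), "/") in Listing 2."""
    after = uri.split("//", 1)[1] if "//" in uri else ""
    return after.split("/", 1)[0]

hostname("https://viaf.org/viaf/12345")  # 'viaf.org'
```

One small divergence: SPARQL's STRBEFORE returns the empty string when the "/" delimiter is absent, whereas this sketch returns the whole remainder; for well-formed `owl:sameAs` URIs the two agree.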
+References
+[1] Berners-Lee T (2006) Linked data. URL https://www.w3.org/DesignIssues/LinkedData.html
+[2] Berners-Lee T, Hendler J, Lassila O (2001) The Semantic Web. Scientific American 284
+[3] Carrasco RC, Candela G, Such MM (2022) rccarrasco/dl diversity: Initial release. URL https://doi.org/10.5281/zenodo.6389967
+[4] Colwell RK, Coddington JA (1994) Estimating terrestrial biodiversity through extrapolation. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences 345(1311):101–118. https://doi.org/10.1098/rstb.1994.0091
+[5] Hill MO (1973) Diversity and evenness: A unifying notation and its consequences. Ecology 54(2):427–432. https://doi.org/10.2307/1934352
+[6] Jarvis S (2013) Capturing the diversity in lexical diversity. Language Learning 63(s1):87–106. https://doi.org/10.1111/j.1467-9922.2012.00739.x
+[7] Kubát M, Milička J (2013) Vocabulary richness measure in genres. Journal of Quantitative Linguistics 20(4):339–349. https://doi.org/10.1080/09296174.2013.830552
+[8] Kyle K, Crossley SA, Jarvis S (2021) Assessing the validity of lexical diversity indices using direct judgements. Language Assessment Quarterly 18(2):154–170. https://doi.org/10.1080/15434303.2020.1844205
+[9] McCarthy PM, Jarvis S (2010) MTLD, vocd-d, and HD-d: A validation study of sophisticated approaches to lexical diversity assessment. Behavior Research Methods 42(2):381–392. https://doi.org/10.3758/brm.42.2.381
+[10] McKee G, Malvern D, Richards B (2000) Measuring vocabulary diversity using dedicated software. Literary and Linguistic Computing 15(3):323–338. https://doi.org/10.1093/llc/15.3.323
+[11] Piantadosi ST (2014) Zipf's word frequency law in natural language: a critical review and future directions. Psychonomic Bulletin & Review 21:1112–30. https://doi.org/10.3758/s13423-014-0585-6
+[12] Roswell M, Dushoff J, Winfree R (2021) A conceptual guide to measuring species diversity. Oikos 130(3):321–338. https://doi.org/10.1111/oik.07202
+[13] Shannon CE (1948) A mathematical theory of communication. The Bell System Technical Journal 27(3):379–423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x
+[14] Smith-Yoshimura K (2020) Transitioning to the next generation of metadata. https://doi.org/10.25333/rqgd-b343
+[15] World Wide Web Consortium (2013) SPARQL query language for RDF. URL https://www.w3.org/TR/sparql11-overview/
+[16] World Wide Web Consortium (2014) RDF 1.1 concepts and abstract syntax. URL https://www.w3.org/TR/rdf11-concepts/
+
diff --git a/vdAzT4oBgHgl3EQfP_uD/content/tmp_files/load_file.txt b/vdAzT4oBgHgl3EQfP_uD/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..86f009ae6ba8ac1e91ec85f09d1cdc6097937379
--- /dev/null
+++ b/vdAzT4oBgHgl3EQfP_uD/content/tmp_files/load_file.txt
@@ -0,0 +1,386 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf,len=385
+Springer Nature 2021 LATEX template
+Measuring the diversity of data and metadata in digital libraries
+Rafael C. Carrasco1*, Gustavo Candela1 and Manuel Marco-Such1
+1* Departamento de Lenguajes y Sistemas Informáticos, Universidad de Alicante, Carretera San Vicente del Raspeig s/n, San Vicente del Raspeig, 03690, Alicante, Spain.
+Corresponding author(s). E-mail(s): carrasco@ua.es
+Contributing authors: gcandela@ua.es; marco@dlsi.ua.es
+Abstract
+Diversity indices have been traditionally used to capture the biodiversity of ecosystems by measuring the effective number of species or groups of species. In contrast to abundance, which is correlated with the amount of data available, diversity indices provide a more robust indicator of the variability of individuals. These types of indices can be employed in the context of digital libraries to identify trends in the distribution of topics, compare the lexica employed by different authors or analyze the coverage of semantic metadata.
+Keywords: Metadata, Digital Libraries, Open Data, Collections as Data
+1 Introduction
+Richness, usually defined as the number of species present in an ecosystem, provides a limited picture of its biodiversity, as it weights all groups equally, regardless of their relative abundances. In contrast, diversity indices [5] are numerical estimators that measure both richness and evenness by giving more relevance to abundant species. They therefore provide an effective number of species which is more robust than the sample size, due to the smaller contribution of rare, possibly undetected, cases.
+As digital libraries become more readily available, there is an increasing need to explore which bibliometric measures could make their features easier to understand. It has been argued [8] that diversity indices could effectively disentangle the correlation between richness and data volume. The purpose of this paper is therefore to analyze how diversity indices could assist researchers and professionals in evaluating the lexical diversity of the content as well as the metadata coverage in digital collections.
+As regards textual content, the type-token ratio (TTR) has been traditionally employed to measure the lexical diversity of documents. The TTR is computed as the number of different words (types) divided by the number of words (tokens) in the text. For example, previous works compare different approaches, including MTLD [9] and vocd [10], to evaluate the TTR and its variability within a sample. Some researchers [7] have also explored whether genres could be characterized by specific TTR probability distributions.
+Previous research has suggested applying diversity indices to evaluate the lexical richness of documents [6]. But other features of digital libraries could also benefit from analysis using diversity concepts. For example, the local and temporal variations in the coverage of topics or authors could be better examined by computing diversity indices, as they are not as sensitive to infrequent items which are not representative of the collection.
+arXiv:2301.01193v1 [cs.DL] 3 Jan 2023
+Let us recall that, in ecology, the true diversity, or diversity index of order k, for an ecosystem with N groups or species is defined as
+D[k] = \left( \sum_{n=1}^{N} p_n^k \right)^{\frac{1}{1-k}}    (1)
+where p_n is the probability or relative abundance of the n-th class, and the parameter k determines the relative weight of frequent versus infrequent groups: the larger k is, the less significant rare species are. There is therefore a family of indices D[k], with the Shannon index (k = 1) and the Simpson index (k = 2) among the most popular [12]. Although the parameter k influences the value of the diversity obtained, the exact choice is not critical when the objective is to compare diversities at different locations or time intervals. In particular, when addressing digital library data and metadata, k = 1 becomes a natural choice, as D[1] can be easily connected to the entropy of a source [13], defined in information theory as
+H = -\sum_{n=1}^{N} p_n \log p_n
+It is thus not difficult to prove that, as k approaches 1, one obtains D[1] = \exp(H). We also note that k = 0 leads to the richness R of the sample.
+In this paper we will explore the applicability of diversity indices to analyzing data (Section 2) and metadata (Section 3) produced by digital libraries. Our comparison between libraries will be based on linked open data collections [1] published by libraries, as they provide an open benchmark.
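Equation (1) and its k → 1 limit translate directly into code. The following is a minimal sketch of ours (not the authors' released scripts [3]) computing D[k] from a histogram of counts:

```python
import math

def diversity(counts, k):
    """Order-k diversity D[k] of a count histogram, as in Eq. (1).

    k = 0 returns the richness R, k = 1 the exponential of the
    Shannon entropy, and k = 2 the inverse Simpson index.
    """
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    if k == 1:
        # Limit of Eq. (1) as k -> 1 is D[1] = exp(H).
        return math.exp(-sum(p * math.log(p) for p in probs))
    return sum(p ** k for p in probs) ** (1 / (1 - k))

counts = [50, 30, 20]             # abundances of three classes
richness = diversity(counts, 0)   # 3.0
shannon = diversity(counts, 1)    # effective number of classes, below 3
```

For a uniform histogram all orders agree with the richness, which is a quick sanity check on the implementation; any unevenness pulls D[k] below R, and the gap grows with k.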
+2 Lexical diversity
+The number M of entries in its vocabulary, also known as the number of token types, provides an indication of the lexical diversity of a document. The number of token types depends, however, on the document length, and M shows a monotonic growth with the number n ≤ N of tokens processed, N being the document length (see Figure 1). This unbounded growth is consistent with the well-known fact that tokens in a collection approximately follow a Zipfian distribution [11]. However, this impedes a direct comparison of texts based on the size of the vocabulary used.
+The number of token types in the plots can be accurately approximated by a power function Cn^α with only two parameters: the scale C and the exponent α. The parameters that best fit the examples can be found in Table 1, and they have been used to draw the lines in Figure 1, which closely follow the data points.
+                      C     α
+Los pazos de Ulloa    6.7   0.68
+Doña Perfecta         6.9   0.66
+La Galatea            11.1  0.59
+Table 1 Optimal parameters for the lines Cn^α depicted in Figure 1.
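Since log M is linear in log n when M = Cn^α, the parameters in Table 1 can be obtained by ordinary least squares on a log-log scale. A stdlib-only sketch (the helper name `fit_power_law` is ours):

```python
import math

def fit_power_law(ns, ms):
    """Fit M ~ C * n**alpha by least squares on a log-log scale.

    Returns (C, alpha): linear regression of log M against log n.
    """
    xs = [math.log(n) for n in ns]
    ys = [math.log(m) for m in ms]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - alpha * mx)
    return c, alpha

# Noiseless synthetic growth curve: the fit recovers the parameters.
ns = [1000, 5000, 20000, 80000]
ms = [7.0 * n ** 0.65 for n in ns]
C, alpha = fit_power_law(ns, ms)
```

On real type counts the recovered exponent would be approximate rather than exact, but the same two parameters suffice to draw the fitted lines of Figure 1.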
+A potential advantage of diversity indices is that they consist of a single finite value with an intuitive interpretation. The diversity of types can be calculated exactly if the underlying probability distribution of the vocabulary is known (and stationary), but, in practice, the probabilities must be estimated from a text sample using the observed frequencies instead. As the accuracy of the estimation increases with the text length, the result will converge to the true value as the number of tokens grows. In the most common situation, however, the sample size is not large enough to approximate the asymptotic value: as shown in Figure 2, the Shannon diversity index is usually still growing when the end of the document is reached.
+The diversity plots in Figure 2 call for a saturating function to model the observed shape. A function which has been traditionally used to estimate biodiversity from samples of variable size [4] is the saturating exponential
+\Delta M_1(n) = D \left( 1 - e^{-\alpha n} \right),    (2)
+which involves only two parameters, the exponent α and the asymptotic value D of the diversity index.
+Fig. 1 Vocabulary size as a function of the number of tokens read for three novels: Los pazos de Ulloa by Emilia Pardo Bazán, Doña Perfecta by Benito Pérez Galdós and La Galatea by Miguel de Cervantes Saavedra.
+A second traditional asymptotic model [4] for species accumulation curves is the two-parameter function
+\Delta M_2(n) = D \frac{n}{n + c}.    (3)
+In our experiments, when models M1 and M2 were extrapolated, they usually underestimated the diversity of larger samples. We therefore investigated additional saturating functions, in particular, a generalized quotient of monomials
+\Delta M_3(n) = D \frac{n + b}{n + c},    (4)
+and the powered quotient
+\Delta M_4(n) = D \left( \frac{n}{n + c} \right)^{\alpha}.    (5)
+We note that in all models, D is the asymptotic value, that is, the true diversity index.
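The four candidate models can be written down in a few lines. The sketch below (our naming; Eq. (2) is taken in its usual saturating form with a negative exponent) only illustrates that all four share the asymptote D:

```python
import math

# Saturating models for the diversity accumulation curve, Eqs. (2)-(5).
def m1(n, D, a):    return D * (1 - math.exp(-a * n))   # saturating exponential
def m2(n, D, c):    return D * n / (n + c)              # hyperbolic quotient
def m3(n, D, b, c): return D * (n + b) / (n + c)        # quotient of monomials
def m4(n, D, c, a): return D * (n / (n + c)) ** a       # powered quotient

# All four approach the true diversity D as the sample grows
# (illustrative parameter values, not fitted to any text):
big = 10 ** 12
estimates = [m1(big, 1000, 1e-4), m2(big, 1000, 5e3),
             m3(big, 1000, 2e3, 5e3), m4(big, 1000, 5e3, 0.8)]
```

What distinguishes the models is how fast they saturate; fitting their free parameters to the first part of a diversity curve and reading off D is what the extrapolation experiments compare.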
+When ten thousand tokens were used to extrapolate the curve for larger values, the results showed that model M4 consistently outperformed the others (see Figure 3). It can be argued that, given the high accuracy of the predictions, the extrapolated diversity computed by model M4 (the value of parameter D) can be used to compare the lexical diversity of texts or that of collections labeled by author, genre or historical period.
+Our results show that the value predicted with model M4 does not depend on the size of the sample text. As an illustration, Figure 4 shows the lexical diversity of works by a prolific author (Lope de Vega) as a function of the text length. The variability we found could be associated with the style of the work (for example, works with rhyming tend to exhibit higher diversity), but the diversity has no significant correlation with the length of the work (Pearson's R ≃ −0.08).
+3 Metadata diversity
+3.1 Catalographic records
+Fig. 2 Shannon diversity index for the works presented in Figure 1.
+Diversity indices can also be employed to analyze the catalographic metadata created by digital libraries. For example, Figure 5 shows the richness and diversity of book authors in the catalogs of three libraries which have published comprehensive collections of catalographic data using open licenses: a large library (Library of Congress, LoC1), a medium-sized library (Universiteitsbibliotheek Gent, UGent2), and a small library (Biblioteca Virtual Miguel de Cervantes, BVC3).
+The richness and diversity lines show a monotonic growth over time with no indication that a plateau could be reached soon. The smaller ratio between diversity and richness for the BVC library (about 33%) in comparison to the ratio for the LoC and UGent collections (52–54%) is a reflection of its narrower scope (the BVC focuses on Hispanic literature and history), which shows a reduced fraction of the authors providing a vast contribution to the catalog. Indeed, the average number of items per author in the BVC collection is µ = 4.9, while this average is lower for the LoC (µ = 2.5) and the UGent library (µ = 2.1).
+We also investigated whether the coverage of topics in a digital library remains stable, serving a specialized audience, or whether it tends to cover a wider spectrum.
+1 Library of Congress full book records: www.loc.gov/item/2020445551
+2 University of Gent book records: lib.ugent.be/info/exports
+3 Miguel de Cervantes book records: data.cervantesvirtual.com/datasets
+Figure 6 shows the trends when the complete descriptor of the subject heading field is analyzed and when its content is split into topical, chronological, geographical, or other subdivisions (so that, for example, the descriptor Commerce–History becomes two subjects, Commerce and History).
+In the samples analyzed, the variety of subjects typically shows a constant growth with time, both in terms of richness and diversity. However, this is not the case for the BVC library when the subjects are decomposed into subdivisions. This is due, on the one hand, to a more intensive usage of chronological subdivisions. On the other hand, an inspection of the records reveals that the library has, after an initial period, progressively increased the fraction of content within the fields of history and literature (and, remarkably, theater) in Spanish, which now accounts for nearly one third of its content. The BVC has thus recently developed into a more specialized library.
+3.2 Linked open data
+Fig. 3 Predictive power of the models when the initial 10000 tokens are used to identify the optimal parameters.
+Fig. 4 Shannon diversity index of books by Lope de Vega.
+Over the last decade, cultural heritage institutions have moved towards adopting the semantic web [2] and linked open data concepts, using the W3C Resource Description Framework to express semantic relationships [16] and the SPARQL [15] language to query them. RDF describes resources (the content of a library) by categorizing them in classes (such as person, work or name) and uses properties (such as author) to express relationships between resources. Both resources and properties are identified by URIs (Uniform Resource Identifiers): for example, a triple (X, P, Y) can link the identifier of a person X to the identifier of a name Y connected by the property P, where the meaning of the URI P is "has name". Analogously, a triple of the form (X, rdf:type, Z) declares X to belong to class Z.
+Libraries have progressively adapted their catalogs [14] to facilitate the publication of Linked Open Data (LOD) repositories. As shown in Table 2, however, they have used a variety of vocabularies for the definition of RDF classes and properties. The repositories have also been made available in various forms, which include public SPARQL endpoints, OAI-PMH interfaces and even open-access dump files.4
+In order to test the application of diversity indices to LOD, the data shown in Table 2 were retrieved from those repositories which distribute them with open licenses and via a public SPARQL endpoint. We note that these endpoints may not always reflect the current situation of the libraries.5 The harvesting was performed with simple scripts,6 such as those presented in Appendix A. The diversity D and richness R of the resources were computed, as well as the diversity-to-richness ratio, which provides an indication of how effective the usage of the available tags is. As shown in Table 3, some libraries, such as the Austrian National Library (AT), the National Library of Finland (FI) and the Koninklijke Bibliotheek (KB), employ vocabularies with a small number of
+4 http://www.openarchives.org/pmh
+5 For example, as of March 2022, the Europeana SPARQL endpoint has not been updated since July 2017.
+6 Some repositories implement a timeout limit for the downloads. In such cases, partitioned queries were needed to retrieve all the information.
+Fig. 5 Cumulative number of authors and Shannon diversity of the authors in the catalog as a function of the year the MARC record entered the catalog.
+Table 2 Linked Open Data repositories published by libraries.
+Institution                Vocabularies       URL
+Austrian National Library  edm bibframe rda   labs.onb.ac.
+page_content='at/en/dataset/lod Biblioteca Nacional de Espa˜na frbr datos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='bne.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='es Biblioteca Virtual M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' de Cervantes rda data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='cervantesvirtual.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='com Biblioth`eque nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' de France frbr data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='bnf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='fr Biblioth`eque nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' du Luxembourg xml data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='bnl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='lu British National Bibliography bibo bnb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='bl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='uk Europeana edm pro.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='europeana.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='eu/page/sparql Deutsche Nationalbibliothek bibframe www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='dnb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='de/EN/lds Library of Congress bibframe id.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='loc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='gov National Library of Finland Schema.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='org bibframe data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='nationallibrary.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='fi Koninklijke Bibliotheek Schema.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='org lrm data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='bibliotheken.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='nl classes and properties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' In contrast, the National Library of France (BNF) and the National Library of Spain (BNE) describe their resources in terms of the richer FRBR and RDA vocabularies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' The BNF also employs a proprietary vocabulary to describe the roles of creators which contains over 500 cat- egories.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' Since they are not uniformly used, this leads to a lower D/R ratio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' The British National Bibliography (BNB) is an intermediate case, as it essentially employs the BIBO vocabulary which contains 33 classes and 88 properties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
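The D, R and D/R values discussed above can be reproduced from class (or property) counts such as those returned by the queries in Appendix A. The following is only a sketch: it assumes D is the exponential of the Shannon entropy (the Hill number of order 1, so that D = R when all classes are used equally often), and the class names and counts are hypothetical, not taken from any of the repositories in Table 2.

```python
import math
from collections import Counter

def shannon_diversity(counts):
    """Effective number of classes: the exponential of the Shannon entropy.

    Assumes D is the Hill number of order 1, so D equals R when every
    class is used equally often and D/R = 1 signals perfectly even usage.
    """
    total = sum(counts)
    h = -sum(n / total * math.log(n / total) for n in counts if n > 0)
    return math.exp(h)

# Hypothetical class counts, e.g. as returned by the query in Listing 1.
class_counts = Counter({"bf:Work": 800, "bf:Instance": 150,
                        "bf:Agent": 40, "bf:Topic": 10})

R = len(class_counts)                         # richness: number of distinct classes
D = shannon_diversity(class_counts.values())  # diversity: effective number of classes
print(R, round(D, 2), round(D / R, 2))        # D/R measures how evenly classes are used
```

Because most occurrences concentrate in one class here, D stays well below R, which is exactly the skewed-usage situation a low D/R ratio flags.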
Although there is a moderate positive correlation between the diversity of classes and the diversity of properties employed in each collection (see Figure 7), some libraries show a finer granularity of classes while others employ a higher variety of properties.

[Figure 5 panels: "Authors in the catalogue (LoC)", "Authors in the catalogue (UGent)" and "Authors in the catalogue (BvC)", each plotting cumulative richness and diversity by year.]

Fig. 6 Cumulative richness and Shannon diversity index of the subjects in the catalog. Left: complete subject headings. Right: subject heading subdivisions. Note the specific scales used for richness.
4 Conclusions

Diversity indices provide a complementary view of the variety of the groups in a collection of data. In contrast to richness, diversity is more robust with respect to the sample size, as it gives less weight to classes with a smaller number of occurrences. When lexical content is analyzed, the diversity of words approaches an asymptotic value which depends on the author and genre of the works. This value can be obtained by extrapolating the observed values with a simple model
[Figure 6 panels: "Diversity of subject headings (UGent)", "Diversity of sh subfields (UGent)", "Diversity of subject headings (BvC)", "Diversity of sh subfields (BvC)", "Diversity of subject headings (LoC)" and "Diversity of sh subfields (LoC)", each plotting cumulative richness (rescaled) and diversity by year.]

Table 3 Diversity D, richness R and diversity-to-richness ratio D/R of the resources contained in linked open data.

Resource type:      class                  property
host           D     R     D/R        D      R     D/R
AT            2.1    5     0.42     10.7    22     0.48
BNB          13.2   33     0.40     26.6    88     0.30
BNE           3.8   16     0.24     50.9   189     0.27
BNF           6.9   26     0.27     55.5   791     0.07
BVC           6.6   27     0.24     32.0   165     0.19
EU            5.1   11     0.46     37.1   115     0.32
FI            7.0   12     0.59     17.3    35     0.49
KB            3.9   12     0.32     14.6    23     0.64

Fig. 7 Shannon diversity of classes and properties in linked open data published by libraries.
involving only three free parameters. The extrapolation proves stable with respect to the size of the sample. As regards metadata, diversity indices can be used to visualize trends, for example, in creator or subject coverage. The ratio between diversity and richness also proves useful to compare the effective usage of the available descriptors (classes and properties) to describe resources in the semantic data (linked open data collections) published by digital libraries. The Python scripts employed for the analyses included in this paper have been published as open-access software in [3].
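The functional form of the three-parameter extrapolation model is not reproduced in this section. Purely as an illustration of the idea, the sketch below uses a generic saturating curve in which diversity grows with sample size and levels off at an asymptote; the form, parameter names and values are all assumptions, not the authors' model.

```python
import math

def saturating_diversity(n, d_inf, k, gamma):
    # Hypothetical three-parameter model: diversity grows with the
    # sample size n and approaches the asymptote d_inf as n -> infinity.
    return d_inf * (1.0 - math.exp(-n / k)) ** gamma

# Assumed parameter values, loosely inspired by the ~1100 asymptote
# visible in the word-diversity figure; not fitted to any real data.
params = dict(d_inf=1100.0, k=2.0e5, gamma=1.2)
for n in (1e4, 1e5, 1e6):
    print(int(n), round(saturating_diversity(n, **params), 1))
```

Once such a model is fitted to the observed cumulative diversity, the asymptotic value is simply the fitted d_inf, which is why the extrapolation can remain stable even when only part of the sample is used.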
Acknowledgments. We thank Frank Vandepitte and Patrick Hochstenbach from the Ghent University Library for their kind assistance in understanding the library catalographic records.
Appendix A SPARQL queries

[Figure 7: scatter plot "Diversity of linked open data collections", plotting diversity of properties (about 10–60) against diversity of classes (about 2–14) for BNF, BNE, EU, BVC, BNB, FI, KB and AT.]

Listing 1 Query used to retrieve all classes and the number of resources per class in a LOD repository.

SELECT ?class (COUNT(?s) AS ?count)
WHERE { ?s a ?class }
GROUP BY ?class

Listing 2 Query retrieving external repositories linked from a specific LOD repository and the number of links to each one.
PREFIX owl: <http://www.w3.org/2002/07/owl#>

SELECT ?hostname (COUNT(?s) AS ?count)
WHERE {
  ?s owl:sameAs ?same .
  BIND(STRBEFORE(STRAFTER(STR(?same), "//"), "/") AS ?hostname)
}
GROUP BY ?hostname
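Some endpoints enforce a timeout on downloads, so the harvest had to be split into partitioned queries. A minimal sketch of such pagination is shown below; the function name, page size and stopping policy are illustrative, not the authors' published scripts.

```python
from itertools import count, islice

def partitioned(query: str, page_size: int = 10000):
    """Yield LIMIT/OFFSET-paginated variants of a SPARQL SELECT query.

    Each page is small enough to be answered before the endpoint
    timeout; the caller stops iterating once a page comes back empty.
    """
    for page in count():
        yield f"{query} LIMIT {page_size} OFFSET {page * page_size}"

# Example: the first three pages of the query in Listing 1.
q = "SELECT ?class (COUNT(?s) AS ?count) WHERE { ?s a ?class } GROUP BY ?class"
pages = list(islice(partitioned(q, page_size=5000), 3))
print(pages[1])  # second page: ... LIMIT 5000 OFFSET 5000
```

In practice an ORDER BY clause should also be added to the paginated query, since without it the endpoint is free to return rows in a different order on each request and pages may overlap or skip results.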
References

[1] Berners-Lee T (2006) Linked data. URL https://www.w3.org/DesignIssues/LinkedData.html
[2] Berners-Lee T, Hendler J, Lassila O (2001) The Semantic Web. Scientific American 284
[3] Carrasco RC, Candela G, Such MM (2022) rccarrasco/dl diversity: Initial release. URL https://doi.org/10.5281/zenodo.6389967
[4] Colwell RK, Coddington JA (1994) Estimating terrestrial biodiversity through extrapolation. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences 345(1311):101–118. URL https://doi.org/10.1098/rstb.1994.0091
+page_content='org/ 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='1098/rstb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='1994.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='0091 [5] Hill MO (1973) Diversity and evenness: A unifying notation and its consequences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' Ecol- ogy 54(2):427–432.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='2307/ 1934352 [6] Jarvis S (2013) Capturing the diversity in lexical diversity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' Language Learning 63(s1):87–106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='1111/j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' 1467-9922.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='00739.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='x [7] Kub´at M, Miliˇcka J (2013) Vocabulary rich- ness measure in genres.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' Journal of Quanti- tative Linguistics 20(4):339–349.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='1080/09296174.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='830552 [8] Kyle K, Crossley SA, Jarvis S (2021) Assess- ing the validity of lexical diversity indices using direct judgements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' Language Assess- ment Quarterly 18(2):154–170.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='1080/15434303.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='1844205 [9] McCarthy PM, Jarvis S (2010) MTLD, vocd- d, and HD-d: A validation study of sophisti- cated approaches to lexical diversity assess- ment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' Behavior Research Methods 42(2):381– 392.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='3758/brm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='381 [10] McKee G, Malvern D, Richards B (2000) Measuring vocabulary diversity using ded- icated software.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' Literary and Linguistic Computing 15(3):323–338.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='org/ 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='1093/llc/15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='323 [11] Piantadosi ST (2014) Zipf’s word frequency law in natural language: a critical review and future directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' Psychonomic bulletin & review 21:1112–30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='3758/ s13423-014-0585-6 [12] Roswell M, Dushoff J, Winfree R (2021) A conceptual guide to measuring species diversity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' Oikos 130(3):321–338.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='1111/oik.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='07202, URL https://onlinelibrary.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='wiley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='com/doi/abs/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' 1111/oik.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='07202 [13] Shannon CE (1948) A mathematical theory of communication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' The Bell System Techni- cal Journal 27(3):379–423.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='org/ 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='1002/j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='1538-7305.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='1948.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='tb01338.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='x [14] Smith-Yoshimura K (2020) Transitioning to the next generation of metadata.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' org/https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='25333/rqgd-b343 [15] World Wide Web Consortium (2013) SPARQL query language for RDF.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' URL https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='w3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='org/TR/ sparql11-overview/ [16] World Wide Web Consortium (2014) RDF 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='1 concepts and abstract syntax.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content=' URL https: Springer Nature 2021 LATEX template Measuring the diversity of metadata 11 //www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='w3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
+page_content='org/TR/rdf11-concepts/' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'}
diff --git a/w9FRT4oBgHgl3EQfhDe7/content/2301.13582v1.pdf b/w9FRT4oBgHgl3EQfhDe7/content/2301.13582v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..61e65e3acf4b75e89884601d2405f0ff28aaa772
--- /dev/null
+++ b/w9FRT4oBgHgl3EQfhDe7/content/2301.13582v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8533c0c2b36eb7d68343b1b2644a1f1d94ae9d77b28dbf8b96175965a7bc004a
+size 535491
diff --git a/ytFKT4oBgHgl3EQfMC3E/vector_store/index.faiss b/ytFKT4oBgHgl3EQfMC3E/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..365966038175f8eafd533fde03dfaf19e99328ec
--- /dev/null
+++ b/ytFKT4oBgHgl3EQfMC3E/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a48b0ff441a22c6547eef930845ccc9c96aa35fb422edf4312ecf767f34a5a8
+size 6815789
diff --git a/zNAyT4oBgHgl3EQf0vni/content/tmp_files/2301.00725v1.pdf.txt b/zNAyT4oBgHgl3EQf0vni/content/tmp_files/2301.00725v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..74ef5295bfababef8e02a3adaddd0618552f964a
--- /dev/null
+++ b/zNAyT4oBgHgl3EQf0vni/content/tmp_files/2301.00725v1.pdf.txt
@@ -0,0 +1,2672 @@
+Learning Invariance from Generated Variance for Unsupervised Person Re-identification
+Hao Chen, Yaohui Wang, Benoit Lagadec, Antitza Dantcheva, Francois Bremond
+Abstract—This work focuses on unsupervised representation learning in person re-identification (ReID). Recent self-supervised contrastive learning methods learn invariance by maximizing the representation similarity between two augmented views of the same image. However, traditional data augmentation may introduce undesirable distortions of identity features, which is not always favorable in id-sensitive ReID tasks. In this paper, we propose to replace traditional data augmentation with a generative adversarial network (GAN) targeted at generating augmented views for contrastive learning. A 3D mesh guided person image generator is proposed to disentangle a person image into id-related and id-unrelated features. Deviating from previous GAN-based ReID methods that only work in id-unrelated space (pose and camera style), we conduct GAN-based augmentation on both id-unrelated and id-related features. We further propose specific contrastive losses to help our network learn invariance from id-unrelated and id-related augmentations. By jointly training the generative and contrastive modules, our method achieves new state-of-the-art unsupervised person ReID performance on mainstream large-scale benchmarks.
+Index Terms—Person re-identification, image synthesis, representation disentanglement, data augmentation, contrastive learning
+1 INTRODUCTION
+Given an image of a target person, a person re-identification (ReID) system [1], [2] aims at matching images of the same person across non-overlapping cameras. With the help of human-annotated labels, supervised person ReID methods [3], [4] have yielded impressive results. However, strong domain gaps usually exist between different domains, such as illumination conditions, camera properties and scenario variations. As shown in previous methods [5], [6], a ReID model trained on a specific domain generalizes poorly to other domains. One straightforward solution is to annotate and re-train the ReID model in each new domain, which is cumbersome and time-consuming for real-world deployments. Towards an automatic adaptive system, unsupervised person ReID [7], [8], [9] has attracted increasing attention in the research community. Compared with supervised counterparts, unsupervised methods directly learn from unlabeled images and therefore entail better scalability in real-world deployments.
+Recent self-supervised contrastive learning studies [10], [11] have shown promising performance in unsupervised representation learning. By maximizing the representation similarity between two different views (augmented versions) of the same image, contrastive methods learn representations that are invariant to different conditions. In this context, data augmentation plays a crucial role in mimicking real-world condition variance: contrastive learning methods build more robust representations when provided with better augmented views. Previous methods generally consider traditional data augmentation techniques, e.g., random flipping, cropping, color jittering, blurring and erasing [12]. However, these random augmentation techniques may cause undesirable distortion of crucial identity information. To overcome this issue, we propose to use a Generative Adversarial Network (GAN) [13] as an augmentation substitute, as it is able to disentangle a representation into id-related and id-unrelated features (see Table 1). More accurate augmented views can be obtained by modifying a certain factor while preserving the other factors.
+• H. Chen, Y. Wang, A. Dantcheva and F. Bremond are with Inria and Université Côte d'Azur, 2004 Route des Lucioles, 06902 Valbonne, France. E-mail: {hao.chen, yaohui.wang, antitza.dantcheva, francois.bremond}@inria.fr
+• B. Lagadec is with European Systems Integration, 362 Avenue du Campon, 06110 Le Cannet, France. E-mail: benoit.lagadec@esifrance.net
+Previous GAN-based unsupervised ReID methods [14], [15], [16], [17] often treat unsupervised ReID as an unsupervised domain adaptation task, which attempts to adapt a model trained on a labeled source domain to an unlabeled target domain. Under this setting, it is intuitive to use GAN-based style transfer [18], [19] to generate source domain images in the style of a target domain; a model can then be re-trained on the generated target-style images with the source domain labels. However, unsupervised domain adaptation performance often relies strongly on the quality and scale of the source domain. Differently, we treat unsupervised ReID as a contrastive representation learning task, where a source domain is not mandatory. To this end, we integrate a generative module and a contrastive module into a joint learning framework.
+For the generative module, we propose a 3D mesh based generator. Conventional pose transfer methods [20], [21] use 2D pose [22] to guide the generation, which does not preserve body shape information. 3D mesh recovery [23] jointly estimates body shape as well as 3D pose, which conserves more identity information for unsupervised ReID. We therefore use 3D meshes to guide the generation; the generated images in new poses are then used as augmented views in the contrastive module.
+For the contrastive module, we use a clustering algorithm to generate pseudo labels, aimed at maximizing the representation similarity between different views of the same
+arXiv:2301.00725v1 [cs.CV] 2 Jan 2023
+
+TABLE 1
+Id-related and id-unrelated factors in a person image.
+Id-related: cloth color; hair color; texture; body shape
+Id-unrelated: pose; view-point; illumination; camera style; background
+pseudo identity. Our model attracts a generated view to its original view, while repulsing the generated view from images of different identities. The contrastive module permits an identity encoder to extract view-invariant identity features, which, in turn, improves the generation quality.
+In our previous work [9], GAN-based augmentation was only conducted on id-unrelated features, which has been common practice in previous GAN-based ReID methods [20], [24], [25]. Modifying id-unrelated features allows for learning identity features that are more invariant to id-unrelated variations. In this paper, we explore the possibility of conducting GAN-based augmentation on the id-related features to further improve the ReID performance. Inspired by Mixup [26], which interpolates two images to learn a smoother decision boundary between two classes, we propose to interpolate disentangled id-related features inside the generative module, a scheme we name Disentangled Mixup (D-Mixup). As shown in Table 2, if two persons P1 and P2 respectively wear red and yellow clothes, an in-between identity in orange clothes should be marked as 0.5P1 + 0.5P2. However, in a dataset, such a person in orange clothes is normally labeled as a totally different identity P3, which hinders a network from learning the accurate relationship between different identities. Compared to traditional image-level Mixup [26] and feature-level Mixup [27], our proposed D-Mixup generates more accurate in-between identity images, which are more suitable for fine-grained person ReID. With D-Mixup, we encourage the network to understand that the mixed identity 0.5P1 + 0.5P2 is unrelated to id-unrelated features (pose and view-point) and related only to id-related features (cloth color).
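The 0.5P1 + 0.5P2 bookkeeping above is plain linear interpolation of identity labels. A minimal sketch (illustrative only; array layout and function names are ours, not the paper's):

```python
import numpy as np

def mix_labels(y1, y2, lam=0.5):
    # Soft label for an in-between identity: lam*P1 + (1-lam)*P2.
    # The mixture is treated as a weighted combination of known
    # identities rather than a brand-new class P3.
    return lam * y1 + (1.0 - lam) * y2

p1 = np.array([1.0, 0.0, 0.0])  # one-hot label of person P1
p2 = np.array([0.0, 1.0, 0.0])  # one-hot label of person P2
y_mix = mix_labels(p1, p2)      # [0.5, 0.5, 0.0], still sums to 1
```

The mixed label stays a valid probability distribution over identities, which is what lets a classifier learn the relationship between P1 and P2 instead of memorizing an unrelated third class.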
+To summarize, our contributions include the following:
+• We propose a 3D mesh guided generator to disentangle representations into id-related and id-unrelated features. Two novel data augmentation techniques are proposed, respectively on id-unrelated and id-related features.
+• We propose Rotation Contrast and Mixup Contrast modules to respectively learn invariance from id-unrelated and id-related augmented views.
+• We propose an enhanced joint generative and contrastive learning framework. We comprehensively investigate how the generative and contrastive modules mutually promote each other and contribute to unsupervised ReID performance.
+• Extensive experiments validate the superiority of the proposed GAN-based augmentation over traditional augmentation for unsupervised person ReID. Our method achieves new state-of-the-art unsupervised person ReID performance on mainstream image-based datasets, including Market-1501, DukeMTMC-reID and MSMT17.
+TABLE 2
+Interpolation results between two random persons P1 and P2 with image-level Mixup [26], feature-level Mixup (F-Mixup) [27] and our proposed disentangled Mixup (D-Mixup). To visualize results from F-Mixup, we follow AMR [28] to train a VAE-GAN for mixed image reconstruction. Our D-Mixup only interpolates disentangled identity features in the generation, which alleviates noise from mixed structural features.
+[Image table: two input images labeled 1.0P1 + 0.0P2 and 0.0P1 + 1.0P2, followed by the Mixup, F-Mixup and D-Mixup outputs, each labeled 0.5P1 + 0.5P2.]
+• Our method can also be applied to video-based person ReID, significantly outperforming previous unsupervised video person ReID methods on the MARS and DukeMTMC-VideoReID datasets.
+2 RELATED WORK
+2.1 Contrastive learning
+Contrastive learning [29] has shown impressive performance for un-/self-supervised representation learning [10], [11], [30], [31], [32], [33]. Such contrastive methods target learning representations that are invariant to different distortions by attracting positive pairs while repulsing negative pairs. For each image, a positive pair can be constituted by two augmented views, whereas all other images in a dataset are regarded as negative samples. Contrastive learning methods benefit from a set of well-defined data augmentation techniques that mimic real-world image distortions. For example, MoCo [11] used random cropping, color jittering, horizontal flipping and grayscale conversion to obtain positive view pairs. As an extension, MoCo-v2 [34] included blurring and stronger color distortion, which enhanced the original method. However, most data augmentation settings in contrastive learning methods were designed for general image classification datasets, e.g., ImageNet [35]. These traditional augmentation techniques are not always suitable for color-sensitive person ReID, especially those that introduce strong color distortion.
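The attract/repulse objective described in this subsection is commonly implemented as an InfoNCE-style loss. The sketch below is a minimal NumPy illustration of that generic objective, not the specific losses proposed in this paper; all names are ours:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss for one anchor: pull the positive (an augmented
    view of the same image) close, push the negatives away.
    All inputs are L2-normalized feature vectors."""
    pos = np.exp(np.dot(anchor, positive) / temperature)
    neg = np.exp(negatives @ anchor / temperature).sum()
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
f = rng.normal(size=8); f /= np.linalg.norm(f)   # feature of an image
f_aug = f                                        # perfectly aligned augmented view
negs = rng.normal(size=(4, 8))                   # features of other images
negs /= np.linalg.norm(negs, axis=1, keepdims=True)
loss = info_nce(f, f_aug, negs)
```

The loss decreases as the anchor and its augmented view align, which is exactly the invariance-learning signal that makes the choice of augmentation so important.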
+2.2 Data augmentation
+As a technique to constitute positive pairs, data augmentation plays an important role in contrastive learning. Recently, GANs and Mixup have provided new approaches to data augmentation in person ReID.
+2.2.1 GAN-based augmentation
+Zheng et al. [36] unconditionally generated a large number of unlabeled person images with DCGAN [37] to enlarge the data volume for supervised ReID. Subsequent GAN-based methods were usually conditioned on some of the factors from Table 1. 1) Pose: With the guidance of 2D poses, FD-GAN [20] and PN-GAN [38] generated a target person in new poses to learn pose-irrelevant representations for single-domain supervised ReID. A similar pose transfer [21] was then proposed to address unsupervised domain adaptive (UDA) ReID. 2) Dataset style (illumination): As a dataset is usually recorded under a uniform illumination condition, PTGAN [14] and SyRI [15] used CycleGAN [39] to minimize the domain gap between different datasets by generating person images in the style of a target domain. 3) Camera style: Instead of the general dataset style, CamStyle [24] transferred images captured from one camera into the style of another camera, in order to reduce inter-camera style gaps. A similar method [16] was then applied to UDA ReID. 4) Background: SBSGAN [40] and CR-GAN [41] were respectively targeted at removing and switching the background of a person image to mitigate background influence for UDA ReID. 5) General structure: By switching global- and local-level identity-unrelated features, IS-GAN [42] disentangled a representation into identity-related and identity-unrelated features without any concrete guidance. As a concrete guidance, a gray-scaled image contains multiple id-unrelated factors of a person image, including pose, background and carrying structures. By recoloring gray-scaled person images with the color distribution of other images, DG-Net [25] and DG-Net++ [17] learned disentangled identity representations invariant to structure factors. Our proposed 3D mesh guided generator shares a certain similarity with pose transfer methods and DG-Net++. However, both pose transfer and DG-Net++ lose body shape information, which can be conserved by 3D meshes. Moreover, as opposed to DG-Net++, we do not transfer style in a cross-domain manner, which allows our method to operate without a source domain.
+2.2.2 Mixup
+Mixup [26] is a simple yet effective data augmentation technique that interpolates two samples and their labels into one new in-between sample, which encourages a smoother decision boundary between two classes. The interpolation can be conducted between two images [26], [43], two feature representations [27], or two portions of different images [44]. Initially proposed for supervised image classification [26], [43], Mixup has been successfully extended to semi-supervised learning [45], [46], unsupervised domain adaptation [47], as well as novel class discovery [48]. AugMix [49] combines multiple augmented versions of an image into one mixed image and shows that such a technique can enhance robustness on corrupted data. CAIL [50] applies image-level Mixup between a source domain image and a target domain image to create a between-domain person image, which facilitates cross-domain knowledge transfer in unsupervised domain adaptive ReID. The above methods usually interpolate whole images or whole representations, resulting in noise from overlapping person structures. To reduce this noise from mixed person structures, we propose to interpolate only disentangled identity features, which is compatible with our proposed 3D mesh guided GAN.
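For concreteness, image-level Mixup as in [26] can be sketched as follows (a generic illustration with toy arrays standing in for images; the blended result overlaps both persons' structures, which is the noise source discussed above):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Image-level Mixup: blend two samples and their labels with a
    coefficient lam ~ Beta(alpha, alpha). Whole images are mixed, so
    poses and backgrounds of the two persons overlap in the result."""
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, y1 = np.full((4, 4, 3), 255.0), np.array([1.0, 0.0])  # toy "image" of P1
x2, y2 = np.zeros((4, 4, 3)), np.array([0.0, 1.0])        # toy "image" of P2
x_mix, y_mix = mixup(x1, y1, x2, y2, rng=np.random.default_rng(0))
```

Because the whole arrays are blended, structural content is mixed together with identity content, unlike a disentangled interpolation that touches only identity features.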
+2.3 Unsupervised person ReID
+Depending on the necessity of a large-scale labeled source dataset, unsupervised person ReID methods can be roughly categorized into unsupervised domain adaptive (UDA) and fully unsupervised ReID. We note that the above-mentioned GAN-based unsupervised ReID methods [14], [15], [16], [17], [21], [41] fall into the setting of UDA ReID. Several works [51], [52] leveraged semantic attributes to facilitate the domain adaptation. Another prominent approach assigns pseudo labels to unlabeled images and conducts pseudo label learning [7], [8], [50], [53], [54], [55], [56]. Pseudo labels can be obtained with existing clustering algorithms, e.g., K-means [8] and DBSCAN [17], [55], or with newly designed pseudo labeling algorithms [53], [56]. Since the performance of UDA ReID is highly correlated with the scale and quality of a source domain, recent fully unsupervised ReID methods have attracted more attention. Most previous fully unsupervised methods [57], [58], [59], [60], [61] were based on pure pseudo label learning. Our previous method GCL [9] entailed a hybrid GAN and pseudo label learning approach, which is compatible with both UDA and fully unsupervised settings. Here we propose a new id-related augmentation, D-Mixup, which enhances our framework to achieve new state-of-the-art performance under both UDA and fully unsupervised settings.
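The pseudo-labeling step shared by these methods can be sketched with a toy K-means over extracted features (a simplified stand-in for the cited clustering algorithms such as K-means or DBSCAN on real ReID embeddings; all names are ours):

```python
import numpy as np

def kmeans_pseudo_labels(features, k, iters=10):
    """Cluster feature vectors into k pseudo identities (toy Lloyd iterations)."""
    # simple deterministic init: spread initial centers over the array
    centers = features[:: max(1, len(features) // k)][:k].copy()
    for _ in range(iters):
        # squared distance of every feature to every center
        dists = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(0)
# two well-separated blobs of "ReID features" -> two pseudo identities
feats = np.vstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(5.0, 0.1, (5, 2))])
pseudo = kmeans_pseudo_labels(feats, k=2)
```

The resulting cluster indices then play the role of identity labels in the subsequent (pseudo-)supervised or contrastive training stage.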
+3
+METHOD
+In this paper, we propose an enhanced joint Generative
+and Contrastive Learning (GCL+) for unsupervised person
+ReID. We define unsupervised ReID as a problem of learn-
+ing invariance from self-augmented variance. As illustrated
+in Fig. 1. (a), the proposed GCL+ constitutes of two modules:
+a generative module that provides GAN-based augmented
+views, as well as a contrastive module that learns invariance
+from augmented views. These two modules are coupled by
+a shared identity encoder. After the joint training, only the
+shared identity encoder is conserved for inference. In the
+following sections, we proceed to provide details related to
+both modules. To facilitate the reading, we include a list of
+abbreviations in Supplementary Materials Section C.
+3.1
+Generative Module
+Our generative module is composed of 4 networks, in-
+cluding an identity encoder Eid, a structure encoder Estr,
+a decoder G and a discriminator D. Given an unlabeled
+person ReID dataset X
+= {x1, x2, ..., xN}, we use the
+prominent algorithm HMR [23] to generate corresponding
+3D meshes, which are then used as structure guidance in
+the generative module. By recoloring a specific 3D mesh
+to reconstruct a real image, a person representation can
+be disentangled into identity and structure features. We conduct data augmentation along two pathways: one on id-unrelated structure features with rotated meshes, and the other on identity features with D-Mixup.
+3.1.1 Mesh-guided Rotation (id-unrelated augmentation)
+As shown in Fig. 1(b), given a person image and an estimated 3D mesh, we denote the 2D projection of the mesh as the original structure sori. To mimic a real-world camera
+
+Fig. 1. (a) General architecture of GCL+: the framework is composed of a generative module (b, c) and a contrastive module (d, e), which are coupled by the shared identity encoder Eid. (b) Mesh rotation (id-unrelated augmentation): the decoder G combines the identity features encoded by Eid and the structure features encoded by Estr to generate an augmented view x′new with a cycle consistency. (c) D-Mixup (id-related augmentation): the decoder G generates an identity-mixed augmented view x′mix from the mixed identity features. (d) Rotation Contrast: viewpoint invariance is enhanced by maximizing the agreement between the original Eid(x), synthesized Eid(x′new) and memory fpos representations. (e) Mixup Contrast: a smoother decision boundary can be learnt with x′mix and the interpolated pseudo label. (Diagram omitted.)
+viewpoint, we rotate the 3D mesh by 45°, 90°, 135°, 180°, 225°, 270° and 315° (see Table 3) and randomly take one 2D projection from these rotated meshes as a new structure snew. The unlabeled image is encoded into identity features by the identity encoder Eid : x → fid, while both the original and new structures are encoded into structure features by the structure encoder Estr : sori → fstr(ori), snew → fstr(new). Combining identity and structure features, the decoder generates synthesized images G : (fid, fstr(ori)) → x′ori, (fid, fstr(new)) → x′new, where a prime denotes generated images.
+As we do not have real images in the new structures (i.e., paired data), a cycle consistency reconstruction [39] is indispensable for the generative module. We encode the generated image in the new structure x′new and decode once again to obtain synthesized images in the original structure, G(Eid(x′new), sori) → x′′ori, where double primes denote cycle-generated images. We calculate an ℓ1 image reconstruction loss between the original image x, the generated image x′ori and the cycle-generated image x′′ori:
+Limg = E[∥x − x′ori∥1] + E[∥x − x′′ori∥1].    (1)
+To enhance the disentanglement in the cycle consistency reconstruction, we also calculate an ℓ1 feature reconstruction loss:
+Lfeat = E[∥fid − Eid(x′new)∥1] + E[∥fid − Eid(x′′ori)∥1].    (2)
+The discriminator D attempts to distinguish between real and generated images with adversarial losses:
+Ladv = E[log D(x) + log(1 − D(x′ori))] + E[log D(x) + log(1 − D(x′new))] + E[log D(x) + log(1 − D(x′′ori))].    (3)
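The loss arithmetic of Eqs. (1)-(3) can be sketched with NumPy on random tensors. This is a sketch of the objectives only, not the authors' training code; shapes, feature dimensions and the discriminator scores are illustrative stand-ins.

```python
import numpy as np

def l1_loss(a, b):
    # Mean absolute error, as in the l1 reconstruction losses of Eqs. (1)-(2).
    return np.mean(np.abs(a - b))

def adv_term(d_real, d_fake, eps=1e-8):
    # One term of the adversarial loss in Eq. (3): E[log D(x) + log(1 - D(x'))].
    return np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

rng = np.random.default_rng(0)
x     = rng.random((4, 3, 256, 128))   # original images (batch, C, H, W)
x_ori = rng.random((4, 3, 256, 128))   # generated x'_ori
x_cyc = rng.random((4, 3, 256, 128))   # cycle-generated x''_ori
f_id     = rng.random((4, 512))        # identity features E_id(x)
f_id_new = rng.random((4, 512))        # E_id(x'_new)
f_id_cyc = rng.random((4, 512))        # E_id(x''_ori)

L_img  = l1_loss(x, x_ori) + l1_loss(x, x_cyc)              # Eq. (1)
L_feat = l1_loss(f_id, f_id_new) + l1_loss(f_id, f_id_cyc)  # Eq. (2)

# Eq. (3): discriminator scores in (0, 1) for the three generated views.
d_real, d_ori, d_new, d_cyc = (rng.uniform(0.1, 0.9, 4) for _ in range(4))
L_adv = adv_term(d_real, d_ori) + adv_term(d_real, d_new) + adv_term(d_real, d_cyc)
```

In a real GAN, D would of course be maximizing this adversarial objective while G minimizes it; here only the loss values themselves are computed.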
+Remark. As shown in Fig. 2, we can switch 2D gray
+images [17], [25], switch meshes between random persons
+or rotate one’s own mesh to introduce new structures as
+generation guidance. Although stronger pose and viewpoint variances can be introduced into generation, random switching hinders the conservation of body shape information. After testing, we find that mesh rotation best preserves body shape and generates the most accurate images, yielding higher performance, as shown in Table 4.
+
+TABLE 3
+Examples of 3D mesh guided generation on the Market-1501 dataset. Each mesh is rotated by 45°, 90°, 135°, 180°, 225°, 270° and 315°. (Image examples omitted.)
+
+3.1.2 D-Mixup (id-related augmentation)
+As shown in Fig. 1(c), given two random person images xi and xj in a mini-batch, we encode the images into identity features Eid(xi) → fid(i) and Eid(xj) → fid(j). Following the original Mixup [26], we use a Beta distribution with a hyper-parameter α to randomly sample a mixing coefficient λ:
+λ ∼ Beta(α, α),  λ∗ = max(λ, 1 − λ),
+fid(mix) = λ∗ · fid(i) + (1 − λ∗) · fid(j),    (4)
+where λ∗ renders the mixed identity more similar to xi. To conserve the corresponding body shape information, we use the original structure of xi, rather than that of xj, as the generation guidance. A mixed person image (see more interpolated examples in Fig. 3) can be generated by combining the mixed identity features and original structure features, G(fid(mix), sori(i)) → x′mix. The discriminator D attempts to distinguish between real and mixed images with the adversarial loss:
+Ladv mix = E[log D(x) + log(1 − D(x′mix))].    (5)
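The D-Mixup sampling of Eq. (4) can be sketched as follows (a minimal NumPy sketch; the feature dimension is illustrative, and α = 0.6 is the value chosen later in Sec. 4.3.1):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.6                     # Beta hyper-parameter
f_i = rng.random(2048)          # identity features E_id(x_i)
f_j = rng.random(2048)          # identity features E_id(x_j)

lam = rng.beta(alpha, alpha)    # lambda ~ Beta(alpha, alpha)
lam_star = max(lam, 1.0 - lam)  # lambda* >= 0.5 keeps the mix closer to x_i
f_mix = lam_star * f_i + (1.0 - lam_star) * f_j   # Eq. (4)
```

Because λ∗ = max(λ, 1 − λ) ≥ 0.5, the mixed feature always stays on the xi side of the interpolation, which is why xi's structure is the natural generation guidance.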
+More discussion about feature regularization losses is
+provided in Supplementary Materials Section A.
+3.1.3 Overall generative loss
+The overall GAN loss combines the above losses (1), (2), (3) and (5) with a weighting coefficient λrecon:
+Lgan = λrecon(Limg + Lfeat) + Ladv + Ladv mix.    (6)
+Fig. 2. Different ways of introducing structural variance (2D gray image switch [25], Mesh switch and Mesh rotation) into generation. (Image examples omitted.)
+TABLE 4
+Performance comparison of rotating one mesh and switching two random meshes in the generation.
+
+Method                     | Duke→Market    | Market→Duke
+                           | mAP    Rank1   | mAP    Rank1
+2D gray image switch [25]  | 60.1   78.8    | 59.5   76.2
+Mesh switch                | 74.2   88.5    | 60.6   76.9
+Mesh rotation              | 74.4   89.7    | 61.3   78.0
+3.2 Contrastive Module
+The generative module described above produces augmented views of a person image, which form positive view pairs for the contrastive module. By maximizing the similarity between positive pairs, the shared identity encoder aims to build robust representations that are invariant to distortions. For one identity, there are commonly several positive images in the dataset, recorded with different poses, camera styles and backgrounds. Maximizing similarity only between an image and its self-augmented views therefore leads to sub-optimal performance. Moreover, previous methods [10], [11] have demonstrated the effectiveness of mining a large number of negative samples in contrastive learning.
+To mine more positives and a large number of negatives, we generate pseudo labels on a memory bank [30] that stores all representations M corresponding to the dataset images X. Given a representation f t in the current epoch, the corresponding memory bank representation M[i] is updated with a momentum hyper-parameter β:
+M[i]t = β · M[i]t−1 + (1 − β) · f t,    (7)
+where M[i]t and M[i]t−1 refer to the memory bank representation at epochs t and t − 1, respectively. The memory bank stores moving-averaged representations, which stabilize the pseudo label generation. To further enhance the pseudo label quality, we compute the k-reciprocal re-ranked Jaccard distance [62] between memory bank representations, which is then fed into the clustering algorithm DBSCAN [63] to generate pseudo labels Y = {y1, y2, ..., yN}. During training, the pseudo labels are renewed at the beginning of each epoch. We design a Rotation Contrast and a Mixup Contrast for the two types of generated views, respectively.
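The momentum update of Eq. (7) can be sketched as follows (a minimal NumPy sketch; the bank size and slot index are illustrative, while β = 0.2 is the value chosen later in Sec. 4.3.1):

```python
import numpy as np

def update_memory(memory, idx, f_t, beta=0.2):
    # Momentum update of Eq. (7): M[i]^t = beta * M[i]^(t-1) + (1 - beta) * f^t.
    memory[idx] = beta * memory[idx] + (1.0 - beta) * f_t
    return memory

rng = np.random.default_rng(0)
memory = rng.random((100, 512))   # one 512-d slot per dataset image
f_new = rng.random(512)           # current-epoch representation of image 7
old = memory[7].copy()
memory = update_memory(memory, 7, f_new, beta=0.2)
```

A small β keeps the bank close to the freshest features; a large β smooths across epochs, which is what stabilizes the subsequent clustering.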
+3.2.1 Rotation Contrast (for id-unrelated augmentation)
+As shown in Fig. 1(d), the original image x and the generated image x′new are encoded by the shared identity encoder into two identity feature vectors Eid(x) → f and Eid(x′new) → f′new. For a representation f with a pseudo label yi, we randomly sample from the memory bank a positive representation fpos with the same pseudo label yi and K negative representations with pseudo labels different from yi. Three positive pairs can be formed, i.e., (f, fpos), (f, f′new) and (fpos, f′new). The f′new and the K sampled negative representations from the memory bank form K negative pairs. We define three view-invariant losses to attract the three positive pairs while repulsing the K negative pairs:
+
+Fig. 3. Linear interpolation of disentangled identity features between two persons P1 and P2, respectively from Market-1501 and DukeMTMC-reID. (Image grid omitted.)
+
+Lvi = E[log(1 + Σi=1..K exp(<f′new · ki>/τ) / exp(<f · fpos>/τ))],    (8)
+L′vi = E[log(1 + Σi=1..K exp(<f′new · ki>/τ) / exp(<f′new · f>/τ))],    (9)
+L′′vi = E[log(1 + Σi=1..K exp(<f′new · ki>/τ) / exp(<f′new · fpos>/τ))],    (10)
+where < · > denotes the cosine similarity between two feature vectors, τ is a temperature hyper-parameter that sharpens the cosine similarity, and ki denotes negative representations sampled from the memory bank. These three loss functions enable the contrastive module to maximize the similarity between the original view f, the generated view f′new and the positive memory view fpos. At the same time, the similarity between the generated view f′new and the K negative memory views is minimized, which encourages the generative module to refine the generated view f′new so that it differs from a large number of negative samples.
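The three view-invariant losses share one template: a softmax-style ratio of one positive pair against the K negatives of f′new. A minimal NumPy sketch (not the authors' implementation; K = 8 and random vectors are illustrative, whereas the paper uses K = 8192 and τ = 0.04):

```python
import numpy as np

def cos(a, b):
    # Cosine similarity <a . b> between two feature vectors.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def vi_loss(f_new, pos_a, pos_b, negatives, tau=0.04):
    # Shared form of Eqs. (8)-(10): attract one positive pair while
    # repulsing f'_new from the K negative memory representations.
    neg = sum(np.exp(cos(f_new, k) / tau) for k in negatives)
    return np.log(1.0 + neg / np.exp(cos(pos_a, pos_b) / tau))

rng = np.random.default_rng(0)
f, f_new, f_pos = rng.standard_normal((3, 512))
negs = rng.standard_normal((8, 512))     # K = 8 negatives for illustration

L_vi  = vi_loss(f_new, f,     f_pos, negs)   # Eq. (8): positive pair (f, f_pos)
L_vi1 = vi_loss(f_new, f_new, f,     negs)   # Eq. (9): positive pair (f'_new, f)
L_vi2 = vi_loss(f_new, f_new, f_pos, negs)   # Eq. (10): positive pair (f'_new, f_pos)
```

Note the numerator (negative pairs) is identical across the three losses; only the positive pair in the denominator changes.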
+3.2.2 Mixup Contrast (for id-related augmentation)
+The mixed image x′mix is encoded by the shared identity encoder into a mixed identity feature vector Eid(x′mix) → f′mix, see Fig. 1(e). To learn a smoother decision boundary between two clusters, as illustrated in Fig. 4, we design a Mixup Contrast for f′mix. As certain instances in a cluster lie close to the decision boundary between two clusters, whereas others lie far away, we define an averaged prototype for a cluster:
+pa = (1/Na) Σ_{M[i]∈ya} M[i],    (11)
+where Na is the number of instances belonging to cluster a.
+
+Fig. 4. Mixup Contrast targets learning a smoother decision boundary between two persons P1 and P2 by contrasting in-between samples (e.g., 0.6P1 + 0.4P2) with in-between prototypes. (Diagram omitted.)
+
+Given a random image representation f, we use a softmax cross-entropy loss Lproto to make f converge to its cluster prototype, which encourages the compactness of a cluster:
+Lproto = E[log(1 + Σi=1..|Y|−1 exp(f · pi) / exp(f · p+))],    (12)
+(12)
+where p+ is the corresponding prototype of f and pi denotes
+other cluster prototypes. |Y| is the number of clusters. Given
+that certain clusters may contain more instances that are
+close to decision boundaries with other clusters, compact
+clusters provide stable mixed prototypes.
+Based on the pseudo labels, we define a mixed prototype
+vector between two clusters i and j:
+pmix = λ∗ · pi + (1 − λ∗) · pj,
+(13)
+where λ∗ is the same mixing coefficient as in Eq. (4).
+For the mixed representation f ′
+mix, we use another soft-
+max cross-entropy loss to maximize its similarity with the
+mixed prototype pmix and minimize its similarity with
+|Y| − 2 negative prototypes that do not belong to the two
+clusters i and j:
+Lmix = E[log (1 +
+�|Y|−2
+i=1
+exp (f ′
+mix · pi)
+exp (f ′
+mix · pmix)
+)].
+(14)
+As opposed to cosine similarity in Eq. (8), (9) and (10), we do
+not compute normalized similarity, as the average operation
+for computing prototype vectors performs as normalization.
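The prototype computation of Eq. (11) and the unnormalized softmax cross-entropy form shared by Eqs. (12) and (14) can be sketched as follows (a minimal NumPy sketch, not the authors' code; the cluster count, random labels and the 0.05 feature scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, dim = 5, 512
memory = rng.standard_normal((200, dim)) * 0.05   # memory bank representations
labels = rng.integers(0, n_clusters, 200)         # pseudo labels (e.g., from DBSCAN)

# Eq. (11): averaged prototype per cluster.
protos = np.stack([memory[labels == a].mean(axis=0) for a in range(n_clusters)])

def softmax_contrast(f, positive, negatives):
    # Shared form of Eqs. (12) and (14); unnormalized dot products, since
    # the prototype averaging already acts as a normalization.
    neg = sum(np.exp(np.dot(f, p)) for p in negatives)
    return np.log(1.0 + neg / np.exp(np.dot(f, positive)))

# Eq. (12): pull f toward its own prototype, away from the |Y|-1 others.
f, y = memory[0], labels[0]
L_proto = softmax_contrast(f, protos[y], [protos[a] for a in range(n_clusters) if a != y])

# Eqs. (13)-(14): mixed prototype between clusters i and j vs. |Y|-2 negatives.
i, j, lam_star = 0, 1, 0.7
p_mix = lam_star * protos[i] + (1 - lam_star) * protos[j]
f_mix = lam_star * memory[labels == i][0] + (1 - lam_star) * memory[labels == j][0]
L_mix = softmax_contrast(f_mix, p_mix,
                         [protos[a] for a in range(n_clusters) if a not in (i, j)])
```

Here f_mix merely stands in for Eid(x′mix); in the actual framework the mixed image is generated and re-encoded rather than mixed in feature space.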
+3.2.3 Overall contrastive loss
+The overall contrastive loss combines the above losses (8), (9), (10), (12) and (14):
+Lcontrast = λvi(Lvi + L′vi + L′′vi) + λmix(Lproto + Lmix).    (15)
+
+3.3 Joint Training
+Our proposed framework incorporates a generative module
+and a contrastive module. The generative module disentan-
+gles a person image representation into identity and struc-
+ture features, which allows for learning purified identity
+features for person ReID. The contrastive module learns
+invariance via contrasting augmented images. If we replace
+the GAN-based augmentation with traditional data aug-
+mentation techniques, both modules can be trained sepa-
+rately. However, a separate training leads to sub-optimal
+performance for both of them. To address this issue, we
+couple the two modules with a shared identity encoder in a
+joint training framework. In this joint training setting, both modules work collaboratively towards one objective: enhancing the discriminability of identity representations. Inside GCL+, the generative module provides both id-unrelated and id-related augmentations for the contrastive module. In turn, the contrastive module maximizes the similarity between positive views while repulsing negative views, which refines the identity representations for better generation quality. The two modules mutually promote each other during joint training, leading to optimal ReID performance. In our proposed framework, a
+forward propagation is firstly conducted on the generative
+module and subsequently on the contrastive module. A
+backward propagation is then conducted with an overall
+loss that combines Eq. (6) and Eq. (15):
+Loverall = Lgan + Lcontrast.
+(16)
+4 EXPERIMENT
+4.1 Datasets and Evaluation Protocols
+We evaluate our proposed method GCL+ on five main-
+stream person ReID benchmarks, including three image-
+based datasets: Market-1501 [64], DukeMTMC-reID [65],
+MSMT17 [14] and two video-based datasets: MARS [66]
+and DukeMTMC-VideoReID [67]. The Market-1501 dataset was collected from 6 cameras in front of a supermarket at Tsinghua University. It is composed of 12,936 images of 751
+identities for training and 19,732 images of 750 identities for
+testing. DukeMTMC-reID was collected from 8 cameras installed on the campus of Duke University. It contains 16,522
+images of 702 persons for training, 2,228 query images and
+17,661 gallery images of 702 persons for testing. MSMT17 is
+a large-scale Re-ID dataset, which includes 32,621 training
+images of 1,041 identities and 93,820 testing images of 3,060
+identities collected from 15 cameras deployed in both indoor
+and outdoor scenes. MARS is a large-scale video-based
+person ReID dataset. The dataset contains 17,503 tracklets
+of 1,261 identities collected from 6 cameras, where 625 iden-
+tities are used for training and the other 636 identities are
+used for testing. DukeMTMC-VideoReID is a video-based
+person ReID dataset derived from DukeMTMC [65] dataset.
+DukeMTMC-VideoReID contains 2,196 training tracklets of
+702 identities and 2,636 testing tracklets of other 702 identi-
+ties.
+As our method includes a GAN and a contrastive
+module, we report results for both unsupervised person
+ReID and generation quality evaluations. For unsupervised person ReID evaluation, we provide results under both the unsupervised domain adaptation and fully unsupervised settings. We report Cumulative Matching Characteristics (CMC) at Rank1, Rank5 and Rank10 accuracies, as well as mean Average Precision (mAP) on the testing set. For
+the generation quality evaluation, we conduct a qualitative
+comparison between our method and state-of-the-art meth-
+ods on generated images.
+4.2 Implementation details
+We introduce implementation details pertaining to the network design, general training configurations and the three-stage optimization.
+Network design. Our network design related to the
+identity encoder Eid, the structure encoder Estr, the de-
+coder G and the discriminator D has been mainly inspired
+by [17], [25]. In the following descriptions, we denote the size of feature maps as channel×height×width. 1) Eid is
+an ImageNet [35] pre-trained ResNet50 [68] with slight
+modifications. The original fully connected layer is replaced
+by a batch normalization layer and a fully connected em-
+bedding layer, which outputs identity representations f in
+512×1×1 for the contrastive module. In parallel, we add a
+part average pooling that outputs identity features fid in
+2048×4×1 for the generative module. 2) Estr is composed
+of four convolutional and four residual layers, which output
+structure features fstr in 128×64×32. 3) G contains four
+residual and four convolutional layers. Every residual layer
+contains two adaptive instance normalization layers [18]
+that transform fid into scale and bias parameters. 4) D is a
+multi-scale PatchGAN [19] discriminator at 64×32, 128×64
+and 256×128.
+General training configurations. Our framework is im-
+plemented under Pytorch [69] and trained with one Nvidia
+V100 GPU. The inputs are resized to 256×128. We empir-
+ically set a large weight λrecon = 5 for reconstruction in
+Eq. (6). With a batch size of 16, we use SGD to train Eid
+and the Adam optimizer to train Estr, G and D. The learning rate is set to 1 × 10−4 for Adam and 3.5 × 10−4 for SGD; both are multiplied by 0.1 after 10 epochs. The DBSCAN maximal
+neighborhood distance is set to 0.5 and minimal sample
+number is set to 4. The number of negatives K is 8192.
+For testing, Eid outputs representations f of dimension 512.
+For video-based person ReID, due to our GPU memory
+constraint, we randomly sample 2 frames per tracklet on
+MARS and 8 frames per tracklet on DukeMTMC-VideoReID
+for training. For testing, all the frames from each tracklet
+are used to calculate a unified tracklet representation for
+similarity ranking. Other settings are kept the same as in the image-based person ReID settings.
+Three-stage optimization. To reduce the noise from imperfect generated images at early epochs, we train the four networks Eid, Estr, G and D with a three-stage optimization. Stage 1, Eid warm-up: we use a state-of-the-art
+unsupervised ReID method to warm up Eid, e.g., ACT [55],
+MMCL [59] and JVTC [60]. Stage 2 Estr, G and D warm-
+up: we freeze Eid and warm up Estr, G, and D only with
+the overall GAN loss in Eq. (6) for 40 epochs. Stage 3 joint
+training: we bring in the memory bank and the pseudo
+labels to jointly train the whole framework with the overall
+loss in Eq. (16) for another 20 epochs.
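The stage schedule above can be sketched as a small helper (hypothetical: the function name and return labels are our own; only the 40-epoch GAN warm-up and 20-epoch joint-training split come from the text):

```python
def training_stage(epoch, warmup_gan_epochs=40, joint_epochs=20):
    # Stage 1 (E_id warm-up with an existing unsupervised ReID method)
    # happens before this loop; stages 2 and 3 are indexed by epoch here.
    if epoch < warmup_gan_epochs:
        return "stage2"   # train E_str, G, D with L_gan only; E_id frozen
    elif epoch < warmup_gan_epochs + joint_epochs:
        return "stage3"   # joint training with L_overall = L_gan + L_contrast
    return "done"

schedule = [training_stage(e) for e in range(60)]
```

The frozen-Eid warm-up in stage 2 keeps early, low-quality generations from corrupting the identity encoder that both modules share.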
+
+Fig. 5. Hyper-parameter analysis on α for the mixup coefficient on the Duke→Market and Market→Duke tasks (plot data rendered as a table).
+α:                 0.2    0.4    0.6    0.8    1.0
+Duke→Market mAP:   74.0   74.2   74.4   73.8   74.0
+Duke→Market Rank1: 89.1   89.6   89.7   89.1   88.7
+Market→Duke mAP:   61.2   61.0   61.3   61.1   60.9
+Market→Duke Rank1: 77.2   77.5   78.0   76.9   76.9
+Fig. 6. Hyper-parameter analysis on β for memory momentum and τ for contrastive temperature on the Duke→Market task (plot data rendered as a table).
+β:     0.1    0.2    0.3    0.4    0.5
+mAP:   74.0   74.4   74.3   74.1   73.8
+Rank1: 88.9   89.7   89.6   89.5   89.1
+τ:     0.02   0.03   0.04   0.05   0.06
+mAP:   54.2   73.8   74.4   74.0   73.8
+Rank1: 74.7   88.8   89.7   89.0   88.6
+4.3 Unsupervised ReID Evaluation
+To validate the effectiveness of each component, we conduct a parameter analysis and ablation experiments with a JVTC [60] baseline. As JVTC+ is the enhanced version of JVTC with camera temporal distribution post-processing, the performance boost from the post-processing is almost fixed; the ablation experiments therefore show similar trends with the JVTC and JVTC+ baselines. We further compare our method with state-of-the-art unsupervised person ReID methods using three different baselines to show the generalizability of our method.
+Fig. 7. Hyper-parameter analysis on the balancing coefficients λvi for the rotation contrast weight, λmix for the mixup contrast weight and λrecon for the reconstruction weight on the Duke→Market task (plot data rendered as a table).
+λvi:    0.6    0.8    1.0    1.2    1.4
+mAP:    73.5   74.0   74.4   74.0   74.4
+Rank1:  89.6   89.3   89.7   89.0   89.0
+λmix:   0.6    0.8    1.0    1.2    1.4
+mAP:    73.8   73.9   74.4   74.1   74.1
+Rank1:  88.9   89.1   89.7   89.3   89.4
+λrecon: 3      4      5      6      7
+mAP:    73.6   74.3   74.4   73.7   73.7
+Rank1:  89.2   89.4   89.7   89.2   89.2
+TABLE 5
+Performance under different clustering neighborhood distance thresholds. 'N' is the approximate number of pseudo-identities.
+
+Threshold | Duke→Market           | Market→Duke
+          | N     mAP    Rank1    | N     mAP    Rank1
+0.4       | ∼642  74.5   89.4     | ∼840  60.9   77.1
+0.45      | ∼605  74.4   89.4     | ∼810  61.2   77.4
+0.5       | ∼584  74.4   89.7     | ∼786  61.3   78.0
+0.55      | ∼540  73.6   88.4     | ∼744  61.1   76.8
+0.6       | ∼500  72.4   87.6     | ∼697  60.7   77.7
+4.3.1 Parameter analysis
+Hyper-parameters such as the mixing coefficient α, the memory momentum β and the view-invariant contrastive loss temperature τ play important roles in the unsupervised person ReID performance of our proposed GCL+ framework. We vary their values to analyze the sensitivity of the framework to each hyper-parameter.
+For the Beta distribution, a larger α makes λ more likely to be close to 0.5. ReID performance on both the Duke→Market and Market→Duke tasks with respect to α is reported in Fig. 5. On both tasks, the optimal performance is achieved when α is around 0.6. Consequently, α is set to 0.6 in our framework.
+The value of β controls the memory updating speed, while the value of τ amplifies the cosine similarity between contrastive views. Generally speaking, an overly large or small value introduces more noise into contrastive learning. We report the performance variation with respect to β and τ on the Duke→Market task in Fig. 6. We find that the performance is more sensitive to the similarity temperature τ. Based on the results, we set β to 0.2 and τ to 0.04.
+The number of possible pseudo-identities N is related to clustering hyper-parameters, such as the maximal neighborhood distance threshold and the minimal cluster sample number. The DBSCAN distance threshold is the maximal distance between two samples for one to be considered in the neighborhood of the other. A larger distance threshold enlarges the radius of a cluster, so that more samples are assigned to the same cluster (N becomes smaller). As shown in Table 5, the threshold value only slightly affects ReID performance.
+As our framework jointly optimizes the generative and contrastive modules, we set weight coefficients to balance the different loss functions in the two modules. We vary the balancing coefficients λrecon, λvi and λmix in Equations (6) and (15). The corresponding results are reported in Fig. 7. Overall, different values in the tested range only slightly influence the final results. Based on the results, we set λrecon = 5, λvi = 1 and λmix = 1.
+4.3.2 Ablation study
+Contrastive learning methods strongly rely on data aug-
+mentation to create different augmented views for con-
+trasting. Our proposed GCL+ outperforms traditional con-
+trastive learning methods by replacing traditional data aug-
+mentation techniques with GAN-based augmentation tech-
+niques. To validate the effectiveness of our proposed GAN-
+based augmentation techniques and contrastive losses, we
+conduct ablation experiments on both Market-1501 and
+DukeMTMC-reID datasets.
+Data augmentation. Data augmentation techniques can be categorized into id-unrelated and id-related augmentation. Id-unrelated augmentation creates intra-image visual distortions. In contrast, id-related augmentation creates inter-image visual distortions, which affect image identities. We compare the results of traditional and generative data augmentation under the fully unsupervised and domain adaptation settings in Table 6. For traditional data augmentation, we use multiple popular person ReID
+TABLE 6
+Ablation study under fully unsupervised and UDA settings on traditional (w/o GAN) and generative (w/ GAN) data augmentation for the contrastive module. 'Multi' refers to multiple commonly used data augmentation techniques for person ReID, including random flipping, padding, cropping and erasing. 'Rotation' refers to our proposed mesh-guided rotation. 'Mixup' is conducted on the image level, while 'F-Mixup' is conducted on the feature level.
+
+Fully unsupervised
+         ID-unrelated      ID-related                | Market                 | Duke
+         Multi  Rotation   Mixup  F-Mixup  D-Mixup   | mAP   R1    R5    R10  | mAP   R1    R5    R10
+w/o GAN  Baseline                                    | 47.2  75.4  86.7  90.5 | 43.9  66.8  77.6  81.0
+         ✓                                           | 58.2  81.1  91.0  93.5 | 50.8  70.8  80.9  83.8
+         ✓                 ✓                         | 60.0  82.5  91.6  94.0 | 51.0  71.1  80.8  84.1
+w/ GAN          ✓                                    | 63.8  83.4  91.8  94.3 | 53.1  72.8  81.2  83.7
+                ✓          ✓                         | 65.9  84.8  92.5  94.3 | 54.3  73.6  82.5  84.9
+                ✓                 ✓                  | 66.1  84.3  92.4  94.6 | 54.2  73.7  82.4  85.5
+                ✓                          ✓         | 66.3  85.3  92.9  94.6 | 54.6  74.2  82.8  85.6
+
+UDA
+         ID-unrelated      ID-related                | Duke→Market            | Market→Duke
+         Multi  Rotation   Mixup  F-Mixup  D-Mixup   | mAP   R1    R5    R10  | mAP   R1    R5    R10
+w/o GAN  Baseline                                    | 65.0  85.7  93.4  95.9 | 56.5  73.9  84.4  87.8
+         ✓                                           | 70.4  86.9  94.3  95.8 | 57.0  74.2  84.2  87.2
+         ✓                 ✓                         | 70.7  87.8  94.1  96.3 | 57.7  74.5  85.0  88.0
+w/ GAN          ✓                                    | 72.5  88.7  94.8  96.3 | 59.9  75.9  86.2  88.5
+                ✓          ✓                         | 73.0  88.9  94.8  96.4 | 60.4  76.5  85.9  88.3
+                ✓                 ✓                  | 72.7  88.8  95.1  96.3 | 60.2  76.7  86.1  88.1
+                ✓                          ✓         | 74.4  89.7  95.5  96.7 | 61.3  78.0  86.8  89.1
+TABLE 7
+Ablation study on the three view-invariant losses in Rotation Contrast and the two prototype losses in Mixup Contrast.
+
+Lvi  L′vi  L′′vi  Lproto  Lmix | Duke→Market   | Market→Duke
+                               | mAP    R1     | mAP    R1
+✓                              | 61.6   82.4   | 51.7   70.6
+✓    ✓                         | 69.1   85.6   | 58.3   74.8
+✓    ✓     ✓                   | 72.5   88.7   | 59.9   75.9
+✓    ✓     ✓      ✓            | 72.8   88.8   | 60.6   76.9
+✓    ✓     ✓      ✓       ✓    | 74.4   89.7   | 61.3   78.0
+Fig. 8. Normalized Mutual Information (NMI) during 20 joint training epochs on Market-1501. 'Trad' refers to traditional data augmentation techniques. 'Rot' refers to id-unrelated mesh-guided rotation. 'Full' refers to combining id-unrelated mesh-guided rotation and id-related D-Mixup. (Plot omitted; NMI ranges roughly between 75% and 90% over the 20 epochs.)
+data augmentation techniques, including random flipping, padding, cropping and erasing [12], as id-unrelated augmentation, and Mixup [26] as id-related augmentation. Even with these traditional data augmentation techniques, our contrastive module significantly outperforms the baseline. When we replace traditional data augmentation with generative data augmentation, the unsupervised person ReID performance is further improved. Our proposed mesh-guided rotation (Rotation) works better than the multiple commonly used data augmentation techniques (Multi) for id-unrelated augmentation. Meanwhile, our proposed D-Mixup achieves better performance than the image-level Mixup and the feature-level F-Mixup for id-related augmentation.
+Effects on pseudo labels. Robust identity representations should have better intra-class compactness and inter-class separability, which leads to better pseudo label quality. We evaluate our pseudo label quality by measuring the Normalized Mutual Information (NMI) [71] between our pseudo labels and the ground truth labels. As illustrated in Fig. 8, traditional data augmentation (Trad) works well at the beginning, but ends up with worse quality. We argue that traditional data augmentation introduces undesirable distortions on identity features, which easily leads to over-fitting for id-sensitive tasks. In contrast, GAN-based augmentation introduces more noise at the beginning, but avoids over-fitting in the final training epochs. In addition, our full GCL+ (Full) conducts both GAN-based id-unrelated and id-related augmentation, which achieves better pseudo label quality than id-unrelated mesh-guided rotation (Rot) alone.
+Contrastive loss. To learn maximal invariance from generated images and memory-stored images, we form three positive pairs for Rotation Contrast, namely (f, fpos), (f, f′new) and (fpos, f′new). By maximizing the similarity between these three positive pairs in Equations (8), (9) and (10), our objective is to build identity representations that are invariant to instance-level pose, viewpoint and background variance. Meanwhile, we use identity prototypes and mixed prototypes in Mixup Contrast to learn a smoother class-level decision boundary with Equations (12) and (14). To confirm the contribution of these contrastive losses, we gradually add each into our framework and report the corresponding results in Table 7. The results indicate that our proposed contrastive losses effectively contribute to learning robust representations for unsupervised person ReID.
+4.3.3 Comparison with state-of-the-art methods
+Image-based person ReID. We compare our proposed
+GCL+ with state-of-the-art unsupervised ReID methods
+under three purely unsupervised and four unsupervised
+domain adaptation evaluation protocols. We evaluate the
+performance of GCL+ with different baselines, including
+MMCL [59], JVTC [60] and ACT [55], to demonstrate the
+generalizability of our proposed method.
+Under the fully unsupervised setting, we report the associated results on the Market-1501, DukeMTMC-reID and MSMT17 datasets in Table 8. We first provide the results of state-of-the-art methods, including BUC [57], SoftSim [58], TSSL [61], MMCL [59], JVTC [60], JVTC+ [60], MetaCam [70], as well as our previous work GCL [9], on the three datasets. Our proposed method GCL+ significantly improves the unsupervised person ReID performance over
+TABLE 8
+Comparison of fully unsupervised ReID methods (%) on the Market1501, DukeMTMC-reID and MSMT17 datasets. We test our proposed method on several baselines; see the names in parentheses.
+
+Method          Reference  | Market1501             | DukeMTMC-reID          | MSMT17
+                           | mAP   R1    R5    R10  | mAP   R1    R5    R10  | mAP   R1    R5    R10
+BUC [57]        AAAI'19    | 29.6  61.9  73.5  78.2 | 22.1  40.4  52.5  58.2 | -     -     -     -
+SoftSim [58]    CVPR'20    | 37.8  71.7  83.8  87.4 | 28.6  52.5  63.5  68.9 | -     -     -     -
+TSSL [61]       AAAI'20    | 43.3  71.2  -     -    | 38.5  62.2  -     -    | -     -     -     -
+MMCL [59]       CVPR'20    | 45.5  80.3  89.4  92.3 | 40.2  65.2  75.9  80.0 | 11.2  35.4  44.8  49.8
+JVTC [60]       ECCV'20    | 41.8  72.9  84.2  88.7 | 42.2  67.6  78.0  81.6 | 15.1  39.0  50.9  56.8
+JVTC+ [60]      ECCV'20    | 47.5  79.5  89.2  91.9 | 50.7  74.6  82.9  85.3 | 17.3  43.1  53.8  59.4
+MetaCam [70]    CVPR'21    | 61.7  83.9  92.3  -    | 53.8  73.8  84.2  -    | 15.5  35.2  48.3  -
+GCL(MMCL) [9]   CVPR'21    | 54.9  83.7  91.6  94.0 | 49.3  69.7  79.7  82.8 | -     -     -     -
+GCL(JVTC) [9]   CVPR'21    | 63.4  83.7  91.6  94.3 | 53.3  72.4  82.0  84.9 | 18.0  41.6  53.2  58.4
+GCL(JVTC+) [9]  CVPR'21    | 66.8  87.3  93.5  95.5 | 62.8  82.9  87.1  88.5 | 21.3  45.7  58.6  64.5
+GCL+(MMCL)      This paper | 56.0  84.0  91.4  93.7 | 49.5  70.2  80.2  83.3 | -     -     -     -
+GCL+(JVTC)      This paper | 66.3  85.3  92.9  94.6 | 54.6  74.2  82.8  85.6 | 19.2  44.7  56.4  61.4
+GCL+(JVTC+)     This paper | 69.3  89.0  94.6  96.0 | 63.5  83.1  87.4  88.8 | 22.0  47.9  61.3  67.1
+TABLE 9
+Comparison of unsupervised domain adaptive ReID methods (%) between the Market1501, DukeMTMC-reID and MSMT17 datasets. We test our proposed method on several baselines; see the names in parentheses.
+
+Method           Reference  | Duke→Market            | Market→Duke            | Market→MSMT17          | Duke→MSMT17
+                            | mAP   R1    R5    R10  | mAP   R1    R5    R10  | mAP   R1    R5    R10  | mAP   R1    R5    R10
+ECN [7]          CVPR'19    | 43.0  75.1  87.6  91.6 | 40.4  63.3  75.8  80.4 | 8.5   25.3  36.3  42.1 | 10.2  30.2  41.5  46.8
+PDA [21]         ICCV'19    | 47.6  75.2  86.3  90.2 | 45.1  63.2  77.0  82.5 | -     -     -     -    | -     -     -     -
+CR-GAN [41]      ICCV'19    | 54.0  77.7  89.7  92.7 | 48.6  68.9  80.2  84.7 | -     -     -     -    | -     -     -     -
+SSG [54]         ICCV'19    | 58.3  80.0  90.0  92.4 | 53.4  73.0  80.6  83.2 | 13.2  31.6  49.6  -    | 13.3  32.2  51.2  -
+MMCL [59]        CVPR'20    | 60.4  84.4  92.8  95.0 | 51.4  72.4  82.9  85.0 | 15.1  40.8  51.8  56.7 | 16.2  43.6  54.3  58.9
+ACT [55]         AAAI'20    | 60.6  80.5  -     -    | 54.5  72.4  -     -    | -     -     -     -    | -     -     -     -
+DG-Net++ [17]    ECCV'20    | 61.7  82.1  90.2  92.7 | 63.8  78.9  87.8  90.4 | 22.1  48.4  60.9  66.1 | 22.1  48.8  60.9  65.9
+JVTC [60]        ECCV'20    | 61.1  83.8  93.0  95.2 | 56.2  75.0  85.1  88.2 | 19.0  42.1  53.4  58.9 | 20.3  45.4  58.4  64.3
+ECN+ [56]        TPAMI'20   | 63.8  84.1  92.8  95.4 | 54.4  74.0  83.7  87.4 | 15.2  40.4  53.1  58.7 | 16.0  42.5  55.9  61.5
+JVTC+ [60]       ECCV'20    | 67.2  86.8  95.2  97.1 | 66.5  80.4  89.9  92.2 | 25.1  48.6  65.3  68.2 | 27.5  52.9  70.5  75.9
+MMT [8]          ICLR'20    | 71.2  87.7  94.9  96.9 | 65.1  78.0  88.8  92.5 | 22.9  49.2  63.1  68.8 | 23.3  50.1  63.9  69.8
+CAIL [50]        ECCV'20    | 71.5  88.1  94.4  96.2 | 65.2  79.5  88.3  91.4 | 20.4  43.7  56.1  61.9 | 24.3  51.7  64.0  68.9
+MetaCam [70]     CVPR'21    | 76.5  90.1  -     -    | 65.0  79.5  -     -    | -     -     -     -    | -     -     -     -
+GCL(ACT) [9]     CVPR'21    | 66.7  83.9  91.4  93.4 | 55.4  71.9  81.6  84.6 | -     -     -     -    | -     -     -     -
+GCL(JVTC) [9]    CVPR'21    | 73.4  89.1  95.0  96.6 | 60.4  77.2  86.2  88.4 | 21.5  45.0  57.1  66.5 | 24.9  50.8  63.4  68.9
+GCL(JVTC+) [9]   CVPR'21    | 75.4  90.5  96.2  97.1 | 67.6  81.9  88.9  90.6 | 27.0  51.1  63.9  69.9 | 29.7  54.4  68.2  74.2
+GCL+(ACT)        This paper | 67.5  84.3  92.6  94.2 | 56.8  73.5  82.8  85.1 | -     -     -     -    | -     -     -     -
+GCL+(JVTC)       This paper | 74.4  89.7  95.5  96.7 | 61.3  78.0  86.8  89.1 | 23.0  48.3  60.6  65.8 | 25.5  52.7  65.2  70.2
+GCL+(JVTC+)      This paper | 76.5  91.6  96.3  97.6 | 68.3  82.6  89.4  91.2 | 27.8  53.8  66.9  72.5 | 31.5  57.9  70.3  76.1
the three baselines MMCL, JVTC and JVTC+. With the newly proposed D-Mixup and Mixup Contrast, our framework GCL+ consistently surpasses our previous work GCL on all three baselines. With the strong baseline JVTC+, our method achieves state-of-the-art performance on all three datasets.
Under the unsupervised domain adaptation setting, we report results on four mainstream benchmarks, namely Duke→Market, Market→Duke, Market→MSMT17 and Duke→MSMT17, in Table 9. Our proposed method GCL+ again achieves better performance than state-of-the-art methods, including ECN [7], PDA [21], CR-GAN [41], SSG [54], MMCL [59], ACT [55], DG-Net++ [17], JVTC [60], ECN+ [56], JVTC+ [60], MMT [8], CAIL [50], MetaCam [70], as well as our previous work GCL [9]. Among these methods, PDA, CR-GAN and DG-Net++ share a certain similarity with our proposed GCL+, in that they are GAN-based. However, PDA and DG-Net++ use either 2D skeletons or randomly gray-scaled images as guidance, which cannot preserve body-shape information. Further, PDA, CR-GAN and DG-Net++ do not manipulate identity features to generate in-between-identity images. CAIL [50] considers cross-domain Mixup, whose interpolated structures may introduce additional noise into identity features; our proposed D-Mixup does not suffer from such interpolated structures. In addition, cross-domain Mixup interpolates images from two domains, whereas our proposed D-Mixup interpolates intra-domain images, which is more flexible for fully unsupervised ReID.
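As a rough illustration of the identity-level interpolation discussed above, a D-Mixup-style augmentation can be sketched as a convex combination of two intra-domain identity representations. This is a hedged sketch, not the exact GCL+ implementation; the function name and the Beta-sampling scheme are illustrative assumptions:

```python
import numpy as np

def identity_mixup(f_id_a, f_id_b, alpha=0.8, rng=None):
    """Interpolate two intra-domain identity representations into an
    in-between identity feature (illustrative D-Mixup-style sketch).
    `alpha` parameterizes the Beta distribution the mixing weight is
    drawn from; the weight lies in (0, 1), so the result stays on the
    segment between the two input features."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)  # mixing weight in (0, 1)
    return lam * f_id_a + (1.0 - lam) * f_id_b, lam
```

The generator can then decode such an interpolated identity feature together with an id-unrelated (structure) feature to synthesize an in-between-identity image.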
Video-based person ReID. We compare our proposed GCL+ with state-of-the-art unsupervised video person ReID methods on the MARS and DukeMTMC-VideoReID datasets. RACE [72] and EUG [67] leverage one labeled video tracklet per identity to initialize their models; such one-example video-based ReID methods cannot truly be considered unsupervised. DAL [73], TAUDL [74] and UTAL [75] utilize the camera label of each tracklet and try to associate tracklets of the same person across different cameras. OIM [76], BUC [57] and TSSL [61] are fully unsupervised video person ReID methods. We use the fully unsupervised method BUC as our baseline. As shown in Table 10, our proposed methods GCL (view-point augmentation) and GCL+ (view-point and in-between identity augmentation) significantly outperform previous unsupervised video-based person ReID methods.
TABLE 10
Comparison with state-of-the-art methods on two video-based ReID datasets, MARS and DukeMTMC-VideoReID. The "Labels" column indicates the labels used by each method: "OneEx" denotes one-example annotation per identity; "Camera" refers to camera annotation. "Baseline (BUC)" refers to our reproduced results.

Method | Labels | MARS (mAP/R1/R5/R10) | DukeMTMC-VideoReID (mAP/R1/R5/R10)
RACE [72] | OneEx | 24.5 / 43.2 / 57.1 / 62.1 | - / - / - / -
EUG [67] | OneEx | 42.4 / 62.6 / 74.9 / - | 63.2 / 72.7 / 84.1 / -
DAL [73] | Camera | 23.0 / 49.3 / 65.9 / 72.2 | - / - / - / -
TAUDL [74] | Camera | 29.1 / 43.8 / 59.9 / 72.8 | - / - / - / -
UTAL [75] | Camera | 35.2 / 49.9 / 66.4 / 77.8 | - / - / - / -
OIM [76] | None | 13.5 / 33.7 / 48.1 / 54.8 | 43.8 / 51.1 / 70.5 / 76.2
BUC [57] | None | 29.4 / 55.1 / 68.3 / 72.8 | 66.7 / 74.8 / 86.8 / 89.7
TSSL [61] | None | 30.5 / 56.3 / - / - | 64.6 / 73.9 / - / -
Baseline (BUC [57]) | None | 32.0 / 51.1 / 66.5 / 71.6 | 67.1 / 72.9 / 86.2 / 90.0
GCL | None | 48.6 / 64.8 / 77.5 / 82.0 | 75.9 / 80.1 / 90.5 / 93.7
GCL+ | None | 50.1 / 66.5 / 78.7 / 82.2 | 76.3 / 80.9 / 91.5 / 94.2
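For reference, the mAP and Rank-k (CMC) scores reported in the tables above can be computed from a query-gallery distance matrix. The following is a simplified sketch: it assumes every query identity appears in the gallery and omits the same-camera filtering used by standard ReID evaluation protocols, so it is not the exact benchmark code:

```python
import numpy as np

def evaluate_reid(dist, q_ids, g_ids, ranks=(1, 5, 10)):
    """Compute CMC Rank-k and mAP from a (num_query, num_gallery)
    distance matrix (simplified single-protocol sketch)."""
    cmc = np.zeros(max(ranks))
    aps = []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])          # gallery sorted by distance
        matches = g_ids[order] == q_ids[i]   # correct-id mask in ranked order
        cmc[np.argmax(matches):] += 1        # first correct match position
        # average precision over all correct matches of this query
        hits = np.cumsum(matches)
        precision = hits[matches] / (np.flatnonzero(matches) + 1.0)
        aps.append(precision.mean())
    cmc /= dist.shape[0]
    return {f"R{k}": float(cmc[k - 1]) for k in ranks}, float(np.mean(aps))
```

A lower distance means a better match, so Rank-1 counts the queries whose nearest gallery entry carries the correct identity.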
4.4 Generation Quality Evaluation
4.4.1 Ablation study
We conduct a qualitative ablation study, presented in Fig. 9, to demonstrate that our proposed contrastive module improves generation quality for person image generation. Unconditional GANs learn a data distribution via per-image reconstruction and adversarial training, and then generate new images that fit the learned distribution. However, unconditional GANs generate from the features of a single image and neglect the features shared by different images of one person (or class). Conditional GANs generally use human-annotated identity labels to learn shared class-level features, which are more view-invariant. Our proposed GCL+ introduces an unsupervised way to learn view-invariant class-level features for person image generation by contrasting pseudo positive views.
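Contrasting pseudo positive views can be sketched with a generic InfoNCE-style objective, where each image's feature is pulled toward the feature of its generated view and pushed away from the other images in the batch. This is an illustrative sketch only and does not reproduce the exact loss denoted Eq. (15) in the paper:

```python
import numpy as np

def info_nce(z_orig, z_gen, temperature=0.07):
    """Generic InfoNCE loss between original-image features and the
    features of their generated (pseudo positive) views; positives sit
    on the diagonal of the batch similarity matrix."""
    z_orig = z_orig / np.linalg.norm(z_orig, axis=1, keepdims=True)
    z_gen = z_gen / np.linalg.norm(z_gen, axis=1, keepdims=True)
    logits = z_orig @ z_gen.T / temperature        # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # cross-entropy on diagonal
```

Minimizing this loss makes the representation of an image agree with that of its generated view, which is the invariance the contrastive module is designed to learn.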
We illustrate two examples, from the Market-1501 and DukeMTMC-reID datasets respectively, in Fig. 9 to validate the effectiveness of our proposed contrastive module for person image generation. Given a target person, a robust identity representation should contain the salient features shared by the majority of observations across different view-points and poses. When GCL+ is trained without Lcontrast, our generative module tends to focus only on the salient features of the original image (the black backpack in the first example and the blue jacket in the second), while neglecting salient features visible in other images of the same person (the yellow t-shirt in the first example and the red backpack in the second). The contrastive module ensures the consistency of identity features for generation in different poses and view-points.
4.4.2 Comparison with state-of-the-art methods
We conduct a qualitative comparison between our proposed method GCL+ and state-of-the-art GAN-based person ReID methods, including FD-GAN [20], IS-GAN [42], DG-Net [25] and DG-Net++ [17]. We re-implement these GAN-based person ReID methods from their published source code and generate six images per real image of the Market-1501 dataset, as shown in Fig. 10. FD-GAN, IS-GAN and DG-Net are supervised methods, which rely on human-annotated labels to learn robust identity-level features. We observe that images generated by FD-GAN and IS-GAN suffer from evident visual blur, which may lose detailed identity information. Compared to FD-GAN and IS-GAN, DG-Net generates sharper images; however, using randomly switched gray-scaled images as guidance is prone to producing incoherent body shapes and carried objects. More comparisons of generative quality between FD-GAN, IS-GAN, DG-Net and our method are provided in Supplementary Material Section B. As a UDA method, DG-Net++ uses cross-domain gray-scaled images as guidance, which shares the same generation problems as DG-Net. Different from DG-Net++, our proposed GCL+ is a fully unsupervised ReID method that directly augments data diversity in the target domain without the need for a labeled source domain. Moreover, each image in GCL+ is generated from its own rotated mesh, which helps conserve body-shape information and does not introduce extra carried structures. The images generated by GCL+ have higher quality and greater similarity to real images than those of the other methods. To validate the generative quality on the DukeMTMC-reID and MSMT17 datasets, we provide more examples in Table 11 and Table 12. Consistency in the id-related space and variance in the id-unrelated space validate the purity (disentanglement quality) of the identity representations in our framework GCL+. We further provide tracklet examples before and after our view-point rotation for video-based person ReID in Fig. 11. The results show that our method also works well for video-based person ReID.

Fig. 9. Qualitative ablation study on the effectiveness of the contrastive loss in Eq. (15) for generation quality. Lcontrast allows preserving salient features from other views (the yellow t-shirt in the first example and the red backpack in the second) in identity representations for generation in different poses and view-points.

TABLE 11
Examples of 3D mesh guided generation on the DukeMTMC-reID dataset (image table; generated views are rotated from 0° to 315° in 45° steps).

Fig. 10. Comparison of generated images on the Market-1501 dataset. Examples from FD-GAN, IS-GAN, DG-Net, DG-Net++ and GCL+ are generated from the same real images shown in the figure. Note that DG-Net++ and GCL+ are unsupervised methods.

TABLE 12
Examples of 3D mesh guided generation on the MSMT17 dataset (image table; generated views are rotated from 0° to 315° in 45° steps).
4.4.3 Failure case analysis
We show some failure cases of the rotation generative model in Fig. 12. When the front-side and back-side patterns of a person are inconsistent, rotation-based generation can hardly produce accurate images after a large rotation.
Fig. 11. Examples of tracklet frames before and after our view-point rotation. Tracklets are sampled from the MARS and DukeMTMC-VideoReID datasets, respectively.
For example, the model may treat visual patterns that appear only on the back side (the backpack in the first row) or only on the front side (the carried objects in the second row) as whole-body appearance features for generation. One possible solution is to use a 3D human-object arrangement mesh generator [77] to help the generative model distinguish humans from objects.
5 CONCLUSION
In this paper, we propose an enhanced joint generative and contrastive learning (GCL+) framework for unsupervised person ReID. The framework is composed of a generative module for data augmentation and a contrastive module that learns invariance from the generated variance. For the generative module, we propose a 3D mesh guided GAN that realizes id-unrelated and id-related augmentation by, respectively, rotating 3D meshes as generation guidance and interpolating two identity representations. For the contrastive module, we design Rotation Contrast and Mixup Contrast, one for each of the two data augmentation techniques, to learn robust identity representations. Extensive experiments validate the superiority of the proposed GAN-based augmentation over traditional augmentation techniques for contrastive representation learning. In turn, the generative module benefits from the learned robust identity representations, which preserve fine-grained identity information and thus improve generation quality. GCL+ outperforms state-of-the-art methods under both the fully unsupervised and the unsupervised domain adaptation settings. Moreover, our contrastive module can be regarded as a contrastive discriminator in a GAN, which provides a new unsupervised approach to identity-preserving person image generation.

Fig. 12. Failure cases of rotation-based generation. First row: the backpack can be generated onto the front side. Second row: the carried object can be generated onto the back side.
ACKNOWLEDGMENTS
This work has been supported by the French government, through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002. The authors are grateful to the OPAL infrastructure from Université Côte d'Azur for providing resources and support.
REFERENCES
[1] M. Ye, J. Shen, G. Lin, T. Xiang, L. Shao, and S. C. H. Hoi, "Deep learning for person re-identification: A survey and outlook," IEEE TPAMI, 2021.
[2] S. Karanam, M. Gou, Z. Wu, A. Rates-Borras, O. Camps, and R. Radke, "A systematic evaluation and benchmark for person re-identification: Features, metrics, and datasets," IEEE TPAMI, 2019.
[3] Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang, "Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline)," in ECCV, 2018.
[4] H. Chen, B. Lagadec, and F. Bremond, "Learning discriminative and generalizable representations by spatial-channel partition for person re-identification," in WACV, 2020.
[5] J. Song, Y. Yang, Y.-Z. Song, T. Xiang, and T. M. Hospedales, "Generalizable person re-identification by domain-invariant mapping network," in CVPR, 2019.
[6] X. Jin, C. Lan, W. Zeng, Z. Chen, and L. Zhang, "Style normalization and restitution for generalizable person re-identification," in CVPR, 2020.
[7] Z. Zhong, L. Zheng, Z. Luo, S. Li, and Y. Yang, "Invariance matters: Exemplar memory for domain adaptive person re-identification," in CVPR, 2019.
[8] Y. Ge, D. Chen, and H. Li, "Mutual mean-teaching: Pseudo label refinery for unsupervised domain adaptation on person re-identification," in ICLR, 2020.
[9] H. Chen, Y. Wang, B. Lagadec, A. Dantcheva, and F. Bremond, "Joint generative and contrastive learning for unsupervised person re-identification," in CVPR, 2021.
[10] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, "A simple framework for contrastive learning of visual representations," in ICML, 2020.
[11] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, "Momentum contrast for unsupervised visual representation learning," in CVPR, 2020.
[12] Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang, "Random erasing data augmentation," in AAAI, 2020.
[13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in NeurIPS, 2014.
[14] L. Wei, S. Zhang, W. Gao, and Q. Tian, "Person transfer GAN to bridge domain gap for person re-identification," in CVPR, 2018.
[15] S. Bak, P. Carr, and J.-F. Lalonde, "Domain adaptation through synthesis for unsupervised person re-identification," in ECCV, 2018.
[16] Z. Zhong, L. Zheng, S. Li, and Y. Yang, "Generalizing a person retrieval model hetero- and homogeneously," in ECCV, 2018.
[17] Y. Zou, X. Yang, Z. Yu, B. V. K. V. Kumar, and J. Kautz, "Joint disentangling and adaptation for cross-domain person re-identification," in ECCV, 2020.
[18] X. Huang and S. Belongie, "Arbitrary style transfer in real-time with adaptive instance normalization," in ICCV, 2017.
[19] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in CVPR, 2017.
[20] Y. Ge, Z. Li, H. Zhao, G. Yin, S. Yi, X. Wang, and H. Li, "FD-GAN: Pose-guided feature distilling GAN for robust person re-identification," in NeurIPS, 2018.
[21] Y.-J. Li, C.-S. Lin, Y.-B. Lin, and Y.-C. F. Wang, "Cross-dataset person re-identification via unsupervised pose disentanglement and adaptation," in ICCV, 2019.
[22] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, "Realtime multi-person 2D pose estimation using part affinity fields," in CVPR, 2017.
[23] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik, "End-to-end recovery of human shape and pose," in CVPR, 2018.
[24] Z. Zhong, L. Zheng, Z. Zheng, S. Li, and Y. Yang, "Camera style adaptation for person re-identification," in CVPR, 2018.
[25] Z. Zheng, X. Yang, Z. Yu, L. Zheng, Y. Yang, and J. Kautz, "Joint discriminative and generative learning for person re-identification," in CVPR, 2019.
[26] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, "mixup: Beyond empirical risk minimization," in ICLR, 2018.
[27] V. Verma, A. Lamb, C. Beckham, A. Najafi, I. Mitliagkas, D. Lopez-Paz, and Y. Bengio, "Manifold mixup: Better representations by interpolating hidden states," in ICML, 2019.
[28] C. Beckham, S. Honari, V. Verma, A. M. Lamb, F. Ghadiri, R. D. Hjelm, Y. Bengio, and C. Pal, "On adversarial mixup resynthesis," in NeurIPS, 2019.
[29] R. Hadsell, S. Chopra, and Y. LeCun, "Dimensionality reduction by learning an invariant mapping," in CVPR, 2006.
[30] Z. Wu, Y. Xiong, S. X. Yu, and D. Lin, "Unsupervised feature learning via non-parametric instance discrimination," in CVPR, 2018.
[31] M. Caron, I. Misra, J. Mairal, P. Goyal, P. Bojanowski, and A. Joulin, "Unsupervised learning of visual features by contrasting cluster assignments," in NeurIPS, 2020.
[32] J.-B. Grill, F. Strub, F. Altché, C. Tallec, P. H. Richemond, E. Buchatskaya, C. Doersch, B. A. Pires, Z. D. Guo, M. G. Azar et al., "Bootstrap your own latent: A new approach to self-supervised learning," in NeurIPS, 2020.
[33] X. Chen and K. He, "Exploring simple siamese representation learning," in CVPR, 2021.
[34] X. Chen, H. Fan, R. Girshick, and K. He, "Improved baselines with momentum contrastive learning," arXiv preprint arXiv:2003.04297, 2020.
[35] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, "ImageNet large scale visual recognition challenge," IJCV, 2015.
[36] Z. Zheng, L. Zheng, and Y. Yang, "Unlabeled samples generated by GAN improve the person re-identification baseline in vitro," in ICCV, 2017.
[37] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," in ICLR, 2016.
[38] X. Qian, Y. Fu, T. Xiang, W. Wang, J. Qiu, Y. Wu, Y.-G. Jiang, and X. Xue, "Pose-normalized image generation for person re-identification," in ECCV, 2018.
[39] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in ICCV, 2017.
[40] Y. Huang, Q. Wu, J. Xu, and Y. Zhong, "SBSGAN: Suppression of inter-domain background shift for person re-identification," in ICCV, 2019.
[41] Y. Chen, X. Zhu, and S. Gong, "Instance-guided context rendering for cross-domain person re-identification," in ICCV, 2019.
[42] C. Eom and B. Ham, "Learning disentangled representation for robust person re-identification," in NeurIPS, 2019.
[43] Y. Tokozume, Y. Ushiku, and T. Harada, "Between-class learning for image classification," in CVPR, 2018.
[44] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo, "CutMix: Regularization strategy to train strong classifiers with localizable features," in ICCV, 2019.
[45] D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. Raffel, "MixMatch: A holistic approach to semi-supervised learning," in NeurIPS, 2019.
[46] D. Berthelot, N. Carlini, E. D. Cubuk, A. Kurakin, K. Sohn, H. Zhang, and C. Raffel, "ReMixMatch: Semi-supervised learning with distribution matching and augmentation anchoring," in ICLR, 2020.
[47] M. Xu, J. Zhang, B. Ni, T. Li, C. Wang, Q. Tian, and W. Zhang, "Adversarial domain adaptation with domain mixup," in AAAI, 2020.
[48] Z. Zhong, L. Zhu, Z. Luo, S. Li, Y. Yang, and N. Sebe, "OpenMix: Reviving known knowledge for discovering novel visual categories in an open world," in CVPR, 2021.
[49] D. Hendrycks, N. Mu, E. D. Cubuk, B. Zoph, J. Gilmer, and B. Lakshminarayanan, "AugMix: A simple data processing method to improve robustness and uncertainty," in ICLR, 2020.
[50] C. Luo, C. Song, and Z. Zhang, "Generalizing person re-identification by camera-aware invariance learning and cross-domain mixup," in ECCV, 2020.
[51] J. Wang, X. Zhu, S. Gong, and W. Li, "Transferable joint attribute-identity deep learning for unsupervised person re-identification," in CVPR, 2018.
[52] S. Lin, H. Li, C.-T. Li, and A. C. Kot, "Multi-task mid-level feature alignment network for unsupervised cross-dataset person re-identification," in BMVC, 2018.
[53] H.-X. Yu, W. Zheng, A. Wu, X. Guo, S. Gong, and J. Lai, "Unsupervised person re-identification by soft multilabel learning," in CVPR, 2019.
[54] Y. Fu, Y. Wei, G. Wang, Y. Zhou, H. Shi, and T. S. Huang, "Self-similarity grouping: A simple unsupervised cross domain adaptation approach for person re-identification," in ICCV, 2019.
[55] F. Yang, K. Li, Z. Zhong, Z. Luo, X. Sun, H. Cheng, X. Guo, F. Huang, R. Ji, and S. Li, "Asymmetric co-teaching for unsupervised cross-domain person re-identification," in AAAI, 2020.
[56] Z. Zhong, L. Zheng, Z. Luo, S. Li, and Y. Yang, "Learning to adapt invariance in memory for person re-identification," IEEE TPAMI, 2020.
[57] Y. Lin, X. Dong, L. Zheng, Y. Yan, and Y. Yang, "A bottom-up clustering approach to unsupervised person re-identification," in AAAI, 2019.
[58] Y. Lin, L. Xie, Y. Wu, C. Yan, and Q. Tian, "Unsupervised person re-identification via softened similarity learning," in CVPR, 2020.
[59] D. Wang and S. Zhang, "Unsupervised person re-identification via multi-label classification," in CVPR, 2020.
[60] J. Li and S. Zhang, "Joint visual and temporal consistency for unsupervised domain adaptive person re-identification," in ECCV, 2020.
[61] G. Wu, X. Zhu, and S. Gong, "Tracklet self-supervised learning for unsupervised person re-identification," in AAAI, 2020.
[62] Z. Zhong, L. Zheng, D. Cao, and S. Li, "Re-ranking person re-identification with k-reciprocal encoding," in CVPR, 2017.
[63] M. Ester, H.-P. Kriegel, J. Sander, and X. Xu, "A density-based algorithm for discovering clusters in large spatial databases with noise," in KDD, 1996.
[64] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian, "Scalable person re-identification: A benchmark," in ICCV, 2015.
[65] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi, "Performance measures and a data set for multi-target, multi-camera tracking," in ECCVW, 2016.
[66] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q. Tian, "MARS: A video benchmark for large-scale person re-identification," in ECCV, 2016.
[67] Y. Wu, Y. Lin, X. Dong, Y. Yan, W. Ouyang, and Y. Yang, "Exploit the unknown gradually: One-shot video-based person re-identification by stepwise learning," in CVPR, 2018.
[68] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in CVPR, 2016.
[69] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "PyTorch: An imperative style, high-performance deep learning library," in NeurIPS, 2019.
[70] F. Yang, Z. Zhong, Z. Luo, Y. Cai, S. Li, and N. Sebe, "Joint noise-tolerant learning and meta camera shift adaptation for unsupervised person re-identification," in CVPR, 2021.
[71] A. Strehl and J. Ghosh, "Cluster ensembles — a knowledge reuse framework for combining multiple partitions," JMLR, 2002.
[72] M. Ye, X. Lan, and P. C. Yuen, "Robust anchor embedding for unsupervised video person re-identification in the wild," in ECCV, 2018.
[73] Y. Chen, X. Zhu, and S. Gong, "Deep association learning for unsupervised video person re-identification," in BMVC, 2018.
[74] M. Li, X. Zhu, and S. Gong, "Unsupervised person re-identification by deep learning tracklet association," in ECCV, 2018.
[75] M. Li, X. Zhu, and S. Gong, "Unsupervised tracklet person re-identification," IEEE TPAMI, 2019.
[76] T. Xiao, S. Li, B. Wang, L. Lin, and X. Wang, "Joint detection and identification feature learning for person search," in CVPR, 2017.
[77] J. Y. Zhang, S. Pepose, H. Joo, D. Ramanan, J. Malik, and A. Kanazawa, "Perceiving 3D human-object spatial arrangements from a single image in the wild," in ECCV, 2020.
Hao Chen received the B.S. degree from Wuhan University in 2014, and the M.S. degree from CentraleSupélec and Université Paris-Saclay in 2017. He is currently working towards his Ph.D. at Inria Sophia Antipolis and Université Côte d'Azur. His research interests include person re-identification and unsupervised learning. Homepage: https://chenhao2345.github.io/.

Yaohui Wang received the B.S. degree from Xidian University in 2015, and the M.S. degree from ENSIIE and Université Paris-Saclay in 2017. He is currently working towards his Ph.D. at Inria Sophia Antipolis, STARS team, and Université Côte d'Azur. His current research focuses on image and video synthesis, activity recognition and representation learning.

Benoit Lagadec is a Research Engineer at European Systems Integration. He currently works on developing video analysis solutions based on abnormal human behavior. Previously, he worked in public research at Ifremer, where he developed image processing algorithms adapted to the difficulties of underwater imaging: denoising and segmentation.

Antitza Dantcheva is a Research Scientist (CRCN) with the STARS team of Inria Sophia Antipolis, France. Previously, she was a Marie Curie fellow at Inria and a Postdoctoral Fellow at Michigan State University and West Virginia University, USA. She received her Ph.D. degree from Télécom ParisTech/Eurecom in image processing and biometrics in 2011. Her research is in computer vision, specifically in designing algorithms that learn suitable representations of the human face for interpretation and generation.

Francois Bremond received the Ph.D. degree from INRIA in video understanding in 1997, and pursued his research work as a postdoctorate at the University of Southern California (USC) on the interpretation of videos taken from Unmanned Airborne Vehicles (UAVs). In 2007, he received the HDR degree (Habilitation à Diriger des Recherches) from Nice University on scene understanding. He created the STARS team on the 1st of January 2012. He is a research director at INRIA Sophia Antipolis, France, and has conducted research in video understanding since 1993 at Sophia Antipolis. He is author or co-author of more than 140 scientific papers published in international journals or conferences on video understanding. He is a handling editor for MVA and a reviewer for several international journals (CVIU, IJPRAI, IJHCS, PAMI, AIJ, Eurasip, JASP) and conferences (CVPR, ICCV, AVSS, VS, ICVS). He has (co-)supervised 26 Ph.D. theses. He is an EC INFSO and French ANR expert for reviewing projects.
+
diff --git a/zNAyT4oBgHgl3EQf0vni/content/tmp_files/load_file.txt b/zNAyT4oBgHgl3EQf0vni/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..9067fd6e7b0917176a9a3d2fd3d032855c48ec1d
--- /dev/null
+++ b/zNAyT4oBgHgl3EQf0vni/content/tmp_files/load_file.txt
@@ -0,0 +1,1852 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf,len=1851
+page_content='1 Learning Invariance from Generated Variance for Unsupervised Person Re-identification Hao Chen, Yaohui Wang, Benoit Lagadec, Antitza Dantcheva, Francois Bremond Abstract—This work focuses on unsupervised representation learning in person re-identification (ReID).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Recent self-supervised contrastive learning methods learn invariance by maximizing the representation similarity between two augmented views of a same image.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' However, traditional data augmentation may bring to the fore undesirable distortions on identity features, which is not always favorable in id-sensitive ReID tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' In this paper, we propose to replace traditional data augmentation with a generative adversarial network (GAN) that is targeted to generate augmented views for contrastive learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' A 3D mesh guided person image generator is proposed to disentangle a person image into id-related and id-unrelated features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Deviating from previous GAN-based ReID methods that only work in id-unrelated space (pose and camera style), we conduct GAN-based augmentation on both id-unrelated and id-related features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' We further propose specific contrastive losses to help our network learn invariance from id-unrelated and id-related augmentations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' By jointly training the generative and the contrastive modules, our method achieves new state-of-the-art unsupervised person ReID performance on mainstream large-scale benchmarks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Index Terms—Person re-identification, image synthesis, representation disentanglement, data augmentation, contrastive learning !' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
1 INTRODUCTION

Given an image of a target person, a person re-identification (ReID) system [1], [2] aims at matching images of the same person across non-overlapping cameras. With the help of human-annotated labels, supervised person ReID methods [3], [4] have yielded impressive results. However, strong gaps usually exist between different domains, such as variations in illumination conditions, camera properties and scenarios. As shown in previous methods [5], [6], a ReID model trained on a specific domain is hard to generalize to other domains. One straightforward solution is to annotate data and re-train the ReID model in each new domain, which is cumbersome and time-consuming for real-world deployments. Towards an automatic adaptive system, unsupervised person ReID [7], [8], [9] has attracted increasing attention in the research community. Compared with supervised counterparts, unsupervised methods learn directly from unlabeled images and therefore offer better scalability in real-world deployments.

Recent self-supervised contrastive learning studies [10], [11] have shown promising performance in unsupervised representation learning. By maximizing the representation similarity between two different views (augmented versions) of the same image, contrastive methods learn representations that are invariant to different conditions. In this context, data augmentation plays a crucial role in mimicking real-world condition variance: contrastive learning methods build more robust representations when provided with better augmented views.
H. Chen, Y. Wang, A. Dantcheva and F. Bremond are with Inria and Université Côte d'Azur, 2004 Route des Lucioles, 06902 Valbonne, France. E-mail: {hao.chen, yaohui.wang, antitza.dantcheva, francois.bremond}@inria.fr. B. Lagadec is with European Systems Integration, 362 Avenue du Campon, 06110 Le Cannet, France. E-mail: benoit.lagadec@esifrance.net.

Previous methods generally consider traditional data augmentation techniques, e.g., random flipping, cropping, color jittering, blurring and erasing [12].
However, these random augmentation techniques may cause undesirable distortion of crucial identity information. To overcome this issue, we propose to use a Generative Adversarial Network (GAN) [13] as an augmentation substitute, since it is able to disentangle a representation into id-related and id-unrelated features (see Table 1). More accurate augmented views can then be obtained by modifying one factor while preserving the others.
Previous GAN-based unsupervised ReID methods [14], [15], [16], [17] often treat unsupervised ReID as an unsupervised domain adaptation task, which attempts to adapt a model trained on a labeled source domain to an unlabeled target domain. Under this setting, it is intuitive to use GAN-based style transfer [18], [19] to generate source-domain images in the style of a target domain; a model can then be re-trained on the generated target-style images with source-domain labels. However, unsupervised domain adaptation performance often relies strongly on the quality and scale of the source domain. In contrast, we treat unsupervised ReID as a contrastive representation learning task, where a source domain is not mandatory.
To this end, we integrate a generative module and a contrastive module into a joint learning framework. For the generative module, we propose a 3D mesh based generator. Conventional pose transfer methods [20], [21] use 2D pose [22] to guide generation, which does not preserve body shape information. 3D mesh recovery [23] jointly estimates body shape as well as 3D pose, which conserves more identity information for unsupervised ReID. We use 3D meshes to guide the generation; generated images in new poses are then used as augmented views in the contrastive module.
For the contrastive module, we use a clustering algorithm to generate pseudo labels, aimed at maximizing representation similarity between different views of the same pseudo identity. Our model attracts a generated view to its original view, while repulsing the generated view from images of different identities. The contrastive module permits an identity encoder to extract view-invariant identity features, which, in turn, improves the generation quality.

arXiv:2301.00725v1 [cs.CV] 2 Jan 2023

TABLE 1
Id-related and id-unrelated factors in a person image.

  Id-related:   cloth color, hair color, texture, body shape
  Id-unrelated: pose, view-point, illumination, camera style, background
In our previous work [9], GAN-based augmentation was only conducted on id-unrelated features, which has been common practice in previous GAN-based ReID methods [20], [24], [25]. Modifying id-unrelated features allows for learning identity features that are more invariant to id-unrelated variations. In this paper, we explore conducting GAN-based augmentation on the id-related features to further improve ReID performance.
Inspired by Mixup [26], which interpolates two images to learn a smoother decision boundary between two classes, we propose to interpolate disentangled id-related features inside the generative module, namely Disentangled Mixup (D-Mixup). As shown in Table 2, if two persons P1 and P2 respectively wear red and yellow clothes, an in-between identity in orange clothes should be marked as 0.5P1 + 0.5P2. However, in a dataset, such a person in orange clothes is normally labeled as a totally different identity P3, which hinders a network from learning the accurate relationship between different identities. Compared to traditional image-level Mixup [26] and feature-level Mixup [27], our proposed D-Mixup generates more accurate in-between identity images, which are more suitable for fine-grained person ReID. With D-Mixup, we make our network understand that the mixed identity 0.5P1 + 0.5P2 is not related to id-unrelated features (pose and view-point), but only to id-related features (cloth color).
To summarize, our contributions include the following:
- We propose a 3D mesh guided generator to disentangle representations into id-related and id-unrelated features. Two novel data augmentation techniques are proposed, respectively on id-unrelated and id-related features.
- We propose Rotation Contrast and Mixup Contrast modules to respectively learn invariance from id-unrelated and id-related augmented views.
- We propose an enhanced joint generative and contrastive learning framework. We comprehensively investigate how the generative and contrastive modules mutually promote each other and contribute to unsupervised ReID performance.
- Extensive experiments validate the superiority of the proposed GAN-based augmentation over traditional augmentation for unsupervised person ReID. Our method achieves new state-of-the-art unsupervised person ReID performance on mainstream image-based datasets, including Market-1501, DukeMTMC-reID and MSMT17.
TABLE 2
Interpolation results between two random persons P1 and P2 with image-level Mixup [26], feature-level Mixup (F-Mixup) [27] and our proposed Disentangled Mixup (D-Mixup). To visualize results from F-Mixup, we follow AMR [28] to train a VAE-GAN for mixed image reconstruction. Our D-Mixup only interpolates disentangled identity features in the generation, which alleviates noise from mixed structural features. [Images omitted; the two input images carry the labels 1.0P1 + 0.0P2 and 0.0P1 + 1.0P2, while the Mixup, F-Mixup and D-Mixup outputs all carry the label 0.5P1 + 0.5P2.]

Our method can also be applied to video-based person ReID, where it significantly outperforms previous unsupervised video person ReID methods on the MARS and DukeMTMC-VideoReID datasets.
2 RELATED WORK

2.1 Contrastive learning

Contrastive learning [29] has shown impressive performance for un-/self-supervised representation learning [10], [11], [30], [31], [32], [33]. Such contrastive methods aim to learn representations that are invariant to different distortions by attracting positive pairs while repulsing negative pairs. For each image, a positive pair can be constituted by two augmented views, whereas all other images in a dataset are regarded as negative samples. Contrastive learning methods benefit from a set of well-defined data augmentation techniques that mimic real-world image distortions. For example, MoCo [11] used random cropping, color jittering, horizontal flipping and grayscale conversion to obtain positive view pairs. As an extension, MoCo-v2 [34] included blurring and stronger color distortion, which enhanced the original method. However, most data augmentation settings in contrastive learning methods were designed for general image classification datasets, e.g., ImageNet [35]. These traditional augmentation techniques are not always suitable for color-sensitive person ReID, especially those that introduce strong color distortion.
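The attract-positive / repulse-negative objective described in this section is commonly implemented as an InfoNCE-style loss. The following minimal NumPy sketch illustrates the idea; the temperature value, feature dimension and toy batch are illustrative assumptions, not settings from this paper:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.07):
    """InfoNCE loss: pull the anchor toward its positive view,
    push it away from negative samples."""
    def unit(v):
        return v / np.linalg.norm(v)
    anchor, positive = unit(anchor), unit(positive)
    negatives = np.stack([unit(n) for n in negatives])
    pos_sim = np.dot(anchor, positive) / temperature
    neg_sims = negatives @ anchor / temperature
    logits = np.concatenate([[pos_sim], neg_sims])
    # Cross-entropy with the positive pair as the target class.
    return float(-pos_sim + np.log(np.sum(np.exp(logits))))

rng = np.random.default_rng(0)
feat = rng.normal(size=128)
view = feat + 0.01 * rng.normal(size=128)          # a mild augmentation
others = [rng.normal(size=128) for _ in range(8)]  # other images in the batch
loss_matched = info_nce(feat, view, others)
loss_mismatched = info_nce(feat, -feat, others)    # a hostile "positive"
```

A faithful positive view yields a much lower loss than a mismatched one, which is exactly why the quality of augmented views matters so much for contrastive ReID.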
2.2 Data augmentation

As a technique to constitute positive pairs, data augmentation plays an important role in contrastive learning. Recently, GAN and Mixup have provided new approaches for data augmentation in person ReID.
2.2.1 GAN-based augmentation

Zheng et al. [36] unconditionally generated a large number of unlabeled person images with DCGAN [37] to enlarge the data volume for supervised ReID. Subsequent GAN-based methods were usually conditioned on one of the factors from Table 1. 1) Pose: with the guidance of 2D poses, FD-GAN [20] and PN-GAN [38] generated a target person in new poses to learn pose-irrelevant representations for single-domain supervised ReID. A similar pose transfer [21] was then proposed to address unsupervised domain adaptive (UDA) ReID. 2) Dataset style (illumination): as a dataset is usually recorded under a uniform illumination condition, PTGAN [14] and SyRI [15] used CycleGAN [39] to minimize the domain gap between different datasets by generating person images in the style of a target domain. 3) Camera style: instead of the general dataset style, CamStyle [24] transferred images captured from one camera into the style of another camera, in order to reduce inter-camera style gaps. A similar method [16] was then applied to UDA ReID. 4) Background: SBSGAN [40] and CR-GAN [41] targeted, respectively, removing and switching the background of a person image to mitigate background influence for UDA ReID. 5) General structure: by switching global- and local-level identity-unrelated features, IS-GAN [42] disentangled a representation into identity-related and identity-unrelated features without any concrete guidance. As a concrete guidance, a gray-scaled image contains multiple id-unrelated factors of a person image, including pose, background and carrying structures. By recoloring gray-scaled person images with the color distribution of other images, DG-Net [25] and DG-Net++ [17] learned disentangled identity representations invariant to structure factors.

Our proposed 3D mesh guided generator shares certain similarities with pose transfer and DG-Net++. However, both pose transfer and DG-Net++ lose body shape information, which can be conserved by 3D meshes. Moreover, as opposed to DG-Net++, we do not transfer style in a cross-domain manner, which allows our method to operate without a source domain.
2.2.2 Mixup

Mixup [26] is a simple yet effective data augmentation technique that interpolates two samples and their labels into a new in-between sample, which encourages a smoother decision boundary between two classes. The interpolation can be conducted between two images [26], [43], two feature representations [27], or two portions of different images [44]. Initially proposed for supervised image classification [26], [43], Mixup has been successfully extended to semi-supervised learning [45], [46], unsupervised domain adaptation [47], as well as novel class discovery [48]. AugMix [49] combines multiple augmented versions of an image into a mixed image and shows that this technique enhances robustness on corrupted data. CAIL [50] applies image-level Mixup between a source-domain image and a target-domain image to create a between-domain person image, which facilitates cross-domain knowledge transfer in unsupervised domain adaptive ReID. These methods usually interpolate whole images or whole representations, resulting in noise from overlapping person structures. To reduce this noise, we propose to interpolate only disentangled identity features, which is compatible with our proposed 3D mesh guided GAN.
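All of these Mixup variants share the same convex-combination core and differ only in what gets mixed. The toy NumPy sketch below contrasts image-level mixing with mixing restricted to a disentangled identity code, in the spirit of D-Mixup; the arrays and the mixing coefficient are illustrative assumptions:

```python
import numpy as np

def mixup(a, b, lam=0.5):
    """The convex combination Mixup applies to samples and labels alike."""
    return lam * a + (1.0 - lam) * b

# Image-level Mixup blends entire images, so both person structures overlap.
img_p1 = np.full((4, 4, 3), 0.9)  # toy stand-in for P1's image
img_p2 = np.full((4, 4, 3), 0.1)  # toy stand-in for P2's image
mixed_img = mixup(img_p1, img_p2)

# Mixing only a disentangled identity code (D-Mixup's idea): structure such
# as pose and background comes from a single source and is never corrupted.
id_p1 = np.array([1.0, 0.0])      # one-hot identity label for P1
id_p2 = np.array([0.0, 1.0])      # one-hot identity label for P2
mixed_id = mixup(id_p1, id_p2)    # the soft label 0.5 P1 + 0.5 P2
```

The soft label remains a valid probability distribution over identities, which is what lets the network treat the generated image as genuinely in-between P1 and P2 rather than as a third identity.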
2.3 Unsupervised person ReID

Depending on the necessity of a large-scale labeled source dataset, unsupervised person ReID methods can be roughly categorized into unsupervised domain adaptive (UDA) and fully unsupervised ReID. We note that the above-mentioned GAN-based unsupervised ReID methods [14], [15], [16], [17], [21], [41] fall into the UDA ReID setting. Several works [51], [52] leveraged semantic attributes to facilitate the domain adaptation. Another prominent approach assigns pseudo labels to unlabeled images and conducts pseudo label learning [7], [8], [50], [53], [54], [55], [56]. Pseudo labels can be obtained by existing clustering algorithms, e.g., K-means [8] and DBSCAN [17], [55], or by newly designed pseudo labeling algorithms [53], [56]. Since the performance of UDA ReID is highly correlated with the scale and quality of the source domain, fully unsupervised ReID methods have recently attracted more attention. Most previous fully unsupervised methods [57], [58], [59], [60], [61] were based on pure pseudo label learning. Our previous method GCL [9] introduced a hybrid GAN and pseudo label learning method, which is compatible with both UDA and fully unsupervised settings. Here we propose a new id-related augmentation, D-Mixup, which enhances our framework to achieve new state-of-the-art performance under both UDA and fully unsupervised settings.
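As a sketch of the pseudo-label step, the following self-contained NumPy K-means (a stand-in for the K-means or DBSCAN clustering cited above; the feature dimension and the two toy blobs are assumptions for illustration) assigns cluster indices to unlabeled features, and those indices then serve as pseudo identities:

```python
import numpy as np

def kmeans_pseudo_labels(features, k, iters=20, seed=0):
    """Cluster unlabeled features; return cluster indices as pseudo labels."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign every feature to its nearest center (Euclidean distance).
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :],
                               axis=-1)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of the features assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Two well-separated blobs stand in for two identities in feature space.
rng = np.random.default_rng(1)
feats = np.concatenate([rng.normal(0.0, 0.1, size=(10, 8)),
                        rng.normal(5.0, 0.1, size=(10, 8))])
pseudo = kmeans_pseudo_labels(feats, k=2)
```

In an actual ReID pipeline the features would come from the identity encoder, and the pseudo labels are refreshed each epoch as the encoder improves.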
3 METHOD

In this paper, we propose an enhanced joint Generative and Contrastive Learning (GCL+) framework for unsupervised person ReID. We define unsupervised ReID as a problem of learning invariance from self-augmented variance. As illustrated in Fig. 1(a), the proposed GCL+ consists of two modules: a generative module that provides GAN-based augmented views, and a contrastive module that learns invariance from the augmented views. The two modules are coupled by a shared identity encoder. After the joint training, only the shared identity encoder is kept for inference. In the following sections, we provide details on both modules. To facilitate reading, we include a list of abbreviations in Supplementary Materials Section C.
+3.1 Generative Module
+
+Our generative module is composed of four networks: an identity encoder Eid, a structure encoder Estr,
+a decoder G and a discriminator D. Given an unlabeled person ReID dataset X = {x1, x2, ..., xN}, we use
+the prominent algorithm HMR [23] to generate corresponding 3D meshes, which are then used as structure
+guidance in the generative module. By recoloring a specific 3D mesh to reconstruct a real image, a
+person representation can be disentangled into identity and structure features. We conduct data
+augmentation along two pathways: one on id-unrelated structure features with rotated meshes, the other
+on identity features with D-Mixup.
+3.1.1 Mesh-guided Rotation (id-unrelated augmentation)
+
+As shown in Fig. 1 (b), given a person image and an estimated 3D mesh, we denote the 2D projection of
+the mesh as the original structure sori. To mimic real-world camera
+Fig. 1. (a) General architecture of GCL+: the framework is composed of a generative module (b, c) and a
+contrastive module (d, e), which are coupled by the shared identity encoder Eid. (b) Mesh rotation
+(id-unrelated augmentation): the decoder G combines the identity features encoded by Eid and the
+structure features encoded by Estr to generate an augmented view x′new with a cycle consistency. (c)
+D-Mixup (id-related augmentation): the decoder G generates an identity-mixed augmented view x′mix from
+the mixed identity features. (d) Rotation Contrast: viewpoint invariance is enhanced by maximizing the
+agreement between the original Eid(x), the synthesized Eid(x′new) and the memory fpos representations.
+(e) Mixup Contrast: a smoother decision boundary can be learnt with x′mix and the interpolated pseudo
+label.
+viewpoint, as shown in Table 3, we rotate the 3D mesh by 45°, 90°, 135°, 180°, 225°, 270° and 315°, and
+randomly take one 2D projection of these rotated meshes as a new structure snew. The unlabeled image is
+encoded to identity features by the identity encoder Eid : x → fid, while both the original and the new
+structure are encoded to structure features by the structure encoder Estr : sori → fstr(ori), snew →
+fstr(new). Combining identity and structure features, the decoder generates synthesized images
+G : (fid, fstr(ori)) → x′ori and (fid, fstr(new)) → x′new, where a prime denotes a generated image. As
+we do not have real images in the new structures (paired data), a cycle-consistent reconstruction [39]
+becomes indispensable for the generative module. We encode the generated image in the new structure
+x′new and decode once again to obtain a synthesized image in the original structure,
+G(Eid(x′new), sori) → x′′ori, where a double prime denotes a cycle-generated image. We calculate an ℓ1
+image reconstruction loss between the original image x, the generated image x′ori and the
+cycle-generated image x′′ori:
+
+Limg = E[∥x − x′ori∥1] + E[∥x − x′′ori∥1].   (1)
+
+To enhance the disentanglement in the cycle-consistent reconstruction, we also calculate an ℓ1 feature
+reconstruction loss:
+
+Lfeat = E[∥fid − Eid(x′new)∥1] + E[∥fid − Eid(x′′ori)∥1].   (2)
+
+The discriminator D attempts to distinguish between real and generated images with adversarial losses:
+
+Ladv = E[log D(x) + log(1 − D(x′ori))] + E[log D(x) + log(1 − D(x′new))] + E[log D(x) + log(1 − D(x′′ori))].   (3)
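+To make Eqs. (1)–(3) concrete, here is a minimal NumPy sketch (not the authors' implementation: the
+function names, array shapes and the batch-mean convention for E[·] are our own assumptions, and the
+discriminator outputs are taken to be probabilities in (0, 1)):

```python
import numpy as np

def l1(a, b):
    # E[||a - b||_1]: sum the absolute error per sample, then average over the batch.
    return np.abs(a - b).reshape(a.shape[0], -1).sum(axis=1).mean()

def image_recon_loss(x, x_ori, x_cyc):
    # Eq. (1): L_img = E[||x - x'_ori||_1] + E[||x - x''_ori||_1]
    return l1(x, x_ori) + l1(x, x_cyc)

def feature_recon_loss(f_id, f_new_enc, f_cyc_enc):
    # Eq. (2): L_feat = E[||f_id - E_id(x'_new)||_1] + E[||f_id - E_id(x''_ori)||_1]
    return l1(f_id, f_new_enc) + l1(f_id, f_cyc_enc)

def adversarial_loss(d_real, d_ori, d_new, d_cyc):
    # Eq. (3): one real/fake term per generated view; E[log D(x)] appears three times.
    return 3 * np.log(d_real).mean() + sum(np.log(1 - d).mean() for d in (d_ori, d_new, d_cyc))
```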
+Remark. As shown in Fig. 2, we can switch 2D gray images [17], [25], switch meshes between random
+persons, or rotate one's own mesh to introduce new structures as generation guidance. Although stronger
+pose and viewpoint variances can be introduced into the generation, random switching hinders the
+conservation of body-shape information. After testing, we find that the most appropriate way to
+preserve body shape and generate accurate images is mesh rotation, which yields higher performance in
+Table 4.
+
+TABLE 3
+Examples of 3D-mesh-guided generation on the Market-1501 dataset. Each mesh is rotated by 45°, 90°,
+135°, 180°, 225°, 270° and 315°.
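+As a rough illustration of the mesh-rotation guidance, the sketch below rotates a vertex array about
+the vertical axis by an angle drawn from the set above; the orthographic projection and the function
+names are our simplifying assumptions (the paper projects HMR meshes, which we do not reproduce):

```python
import numpy as np

ANGLES = (45, 90, 135, 180, 225, 270, 315)  # rotation angles used for augmentation

def rotate_mesh(vertices, rng):
    """Rotate mesh vertices of shape (N, 3) about the vertical (y) axis by an
    angle drawn at random from ANGLES."""
    theta = np.deg2rad(rng.choice(ANGLES))
    c, s = np.cos(theta), np.sin(theta)
    rot_y = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return vertices @ rot_y.T

def project_2d(vertices):
    # Orthographic projection: drop the depth (z) axis to get a 2D structure map.
    return vertices[:, :2]
```

+The 2D projection of the rotated vertices plays the role of the new structure snew.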
+3.1.2 D-Mixup (id-related augmentation)
+
+As shown in Fig. 1 (c), given two random person images xi and xj in a mini-batch, we encode the images
+into identity features Eid(xi) → fid(i) and Eid(xj) → fid(j). We follow the original Mixup [26] in
+using a Beta distribution with a hyper-parameter α to randomly sample a mixing coefficient λ:
+
+λ ∼ Beta(α, α),  λ∗ = max(λ, 1 − λ),
+fid(mix) = λ∗ · fid(i) + (1 − λ∗) · fid(j),   (4)
+
+where λ∗ renders the mixed identity more similar to xi. To conserve the corresponding body-shape
+information, we use the original structure of xi, rather than that of xj, as the generation guidance. A
+mixed person image (see more interpolated examples in Fig. 3) can be generated by combining the mixed
+identity features and the original structure features, G(fid(mix), sori(i)) → x′mix. The discriminator
+D attempts to distinguish between real and mixed images with the adversarial loss:
+
+Ladv mix = E[log D(x) + log(1 − D(x′mix))].   (5)
+
+More discussion about feature regularization losses is provided in Supplementary Materials Section A.
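+Eq. (4) is a few lines of code; the sketch below follows it directly (the function name is ours):

```python
import numpy as np

def d_mixup(f_i, f_j, alpha, rng):
    """Eq. (4): sample lam ~ Beta(alpha, alpha), take lam* = max(lam, 1 - lam), and
    mix the identity features so the result stays closer to x_i (whose original
    structure s_ori(i) guides the decoder G)."""
    lam = rng.beta(alpha, alpha)
    lam_star = max(lam, 1.0 - lam)
    f_mix = lam_star * f_i + (1.0 - lam_star) * f_j
    return f_mix, lam_star
```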
+3.1.3 Overall generative loss
+
+The overall GAN loss combines the above losses (1), (2), (3) and (5) with a weighting coefficient
+λrecon:
+
+Lgan = λrecon(Limg + Lfeat) + Ladv + Ladv mix.   (6)
+
+Fig. 2. Different ways of introducing structural variance (2D gray image switch [25], mesh switch and
+mesh rotation) into the generation.
+TABLE 4
+Performance comparison of rotating one mesh and switching two random meshes in the generation.
+
+Method                      Duke→Market      Market→Duke
+                            mAP    Rank1     mAP    Rank1
+2D gray image switch [25]   60.1   78.8      59.5   76.2
+Mesh switch                 74.2   88.5      60.6   76.9
+Mesh rotation               74.4   89.7      61.3   78.0
+
+3.2 Contrastive Module
+
+The described generative module generates augmented views of a person image, which can form positive
+view pairs for the contrastive module.
+By maximizing the similarity between positive pairs, the shared identity encoder aims to build robust
+representations that are invariant to distortions. For one identity, there are usually several positive
+images in the dataset, recorded under different poses, camera styles and backgrounds. Maximizing the
+similarity only between an image and its self-augmented views therefore leads to sub-optimal
+performance. Moreover, previous methods [10], [11] have demonstrated the effectiveness of mining a
+large number of negative samples in contrastive learning. In order to mine more positives and a large
+number of negatives, we generate pseudo labels on a memory bank [30] that stores all representations M
+corresponding to the dataset images X. Given a representation f t in the current epoch, the
+corresponding memory bank representation M[i] is updated with a momentum hyper-parameter β:
+
+M[i]t = β · M[i]t−1 + (1 − β) · f t,   (7)
+
+where M[i]t and M[i]t−1 respectively refer to the memory bank representations in the t-th and (t−1)-th
+epochs. The memory bank stores moving-averaged representations, which stabilize the pseudo-label
+generation. To further enhance the pseudo-label quality, we compute the k-reciprocal re-ranked Jaccard
+distance [62] between memory bank representations, which is then fed into the clustering algorithm
+DBSCAN [63] to generate pseudo labels Y = {y1, y2, ..., yN}. During training, the pseudo labels are
+renewed at the beginning of each epoch. We design a Rotation Contrast and a Mixup Contrast respectively
+for the two types of generated views.
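+The momentum update of Eq. (7) can be sketched as follows (class and method names are ours; the
+re-ranking and DBSCAN clustering steps are not reproduced here):

```python
import numpy as np

class FeatureMemory:
    """Momentum-updated memory bank, Eq. (7): M[i]^t = beta * M[i]^(t-1) + (1 - beta) * f^t."""

    def __init__(self, num_samples, dim, beta):
        self.bank = np.zeros((num_samples, dim))  # one slot per dataset image
        self.beta = beta

    def update(self, i, f):
        # Moving average keeps the bank stable across epochs for pseudo-label clustering.
        self.bank[i] = self.beta * self.bank[i] + (1.0 - self.beta) * f
```

+Pseudo labels would then come from clustering pairwise distances over `bank` once per epoch.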
+page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 Rotation Contrast (for id-unrelated augmentation) As shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' (d), the original image x and the generated image x′ new are encoded by the shared identity encoder into two identity feature vectors Eid(x) → f and Eid(x′ new) → f ′ new.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
For a representation f with a pseudo label y_i, we randomly sample from the memory bank a positive representation f_pos with the same pseudo label y_i and K negative representations with pseudo labels different from y_i.

[Fig. 3. Linear interpolation of disentangled identity features between two persons, respectively from Market-1501 and DukeMTMC-reID.]

Three positive pairs can be formed, i.e., (f, f_pos), (f, f'_new) and (f_pos, f'_new). The f'_new and the K negative representations sampled from the memory bank are used to form K negative pairs.
We define three view-invariant losses to attract the three positive pairs while repulsing the K negative pairs:

L_vi = E[ log(1 + Σ_{i=1}^{K} exp(<f'_new, k_i>/τ) / exp(<f, f_pos>/τ) ) ],   (8)

L'_vi = E[ log(1 + Σ_{i=1}^{K} exp(<f'_new, k_i>/τ) / exp(<f'_new, f>/τ) ) ],   (9)

L''_vi = E[ log(1 + Σ_{i=1}^{K} exp(<f'_new, k_i>/τ) / exp(<f'_new, f_pos>/τ) ) ],   (10)

where <·,·> denotes the cosine similarity between two feature vectors, τ is a temperature hyper-parameter that sharpens the cosine similarity, and k_i denotes the negative representations sampled from the memory bank. These three loss functions enable the contrastive module to maximize the similarity between the original view f, the generated view f'_new and the positive memory view f_pos. At the same time, the similarity between the generated view f'_new and the K negative memory views is minimized, which encourages the generative module to refine the generated view f'_new so that it differs from a large number of negative samples.
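Assuming l2-normalized feature vectors (so a dot product equals cosine similarity), the view-invariant loss of Eq. (8) can be sketched as follows; the function name and interface are illustrative, not taken from the authors' code.

```python
import numpy as np

def view_invariant_loss(f_new, f_anchor, f_pos, negatives, tau=0.05):
    """Sketch of Eq. (8).

    f_new:     generated-view feature f'_new, shape (d,), l2-normalized
    f_anchor:  original-view feature f, shape (d,), l2-normalized
    f_pos:     positive memory feature f_pos, shape (d,), l2-normalized
    negatives: K negative memory features k_i, shape (K, d), l2-normalized
    tau:       temperature sharpening the cosine similarities
    """
    # numerator: sum over the K negative pairs (f'_new, k_i)
    neg = np.exp(negatives @ f_new / tau).sum()
    # denominator: the positive pair (f, f_pos)
    pos = np.exp(f_anchor @ f_pos / tau)
    return np.log(1.0 + neg / pos)
```

Eqs. (9) and (10) differ only in the positive pair placed in the denominator, (f'_new, f) and (f'_new, f_pos) respectively.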
3.2.2 Mixup Contrast (for id-related augmentation)
The mixed image x'_mix is encoded by the shared identity encoder into a mixed identity feature vector E_id(x'_mix) → f'_mix, see Fig. 1 (e).
Towards learning a smoother decision boundary between two clusters, as illustrated in Fig. 4, we design a Mixup Contrast for f'_mix.

[Fig. 4. Mixup Contrast targets at learning a smoother decision boundary between two persons P1 and P2 by contrasting in-between samples with in-between prototypes.]

As certain instances in a cluster are close to the decision boundary between two prototype clusters, whereas the others are far away, we define an averaged prototype for a cluster:

p_a = (1/N_a) Σ_{M[i] ∈ y_a} M[i],   (11)

where N_a is the number of instances belonging to the cluster a.
Given a random image representation f, we use a softmax cross-entropy loss L_proto to make f converge to its cluster prototype, which encourages the compactness of a cluster:

L_proto = E[ log(1 + Σ_{i=1}^{|Y|−1} exp(f · p_i) / exp(f · p_+) ) ],   (12)

where p_+ is the prototype corresponding to f, p_i denotes the other cluster prototypes, and |Y| is the number of clusters. Given that certain clusters may contain more instances that are close to decision boundaries with other clusters, compact clusters provide stable mixed prototypes. Based on the pseudo labels, we define a mixed prototype vector between two clusters i and j:

p_mix = λ* · p_i + (1 − λ*) · p_j,   (13)

where λ* is the same mixing coefficient as in Eq. (4). For the mixed representation f'_mix, we use another softmax cross-entropy loss to maximize its similarity with the mixed prototype p_mix and minimize its similarity with the |Y| − 2 negative prototypes that do not belong to the two clusters i and j:

L_mix = E[ log(1 + Σ_{i=1}^{|Y|−2} exp(f'_mix · p_i) / exp(f'_mix · p_mix) ) ].   (14)

As opposed to the cosine similarity in Eqs. (8), (9) and (10), we do not compute a normalized similarity here, as the averaging operation used to compute the prototype vectors acts as a normalization.
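A minimal numpy sketch of the averaged prototypes of Eq. (11) and the Mixup Contrast of Eqs. (13) and (14); function names are illustrative, and plain dot products are used, matching the unnormalized-similarity remark above.

```python
import numpy as np

def cluster_prototypes(memory, labels):
    """Averaged prototype per cluster, Eq. (11): p_a = mean of M[i] with label a."""
    return np.stack([memory[labels == a].mean(axis=0)
                     for a in range(labels.max() + 1)])

def mixup_loss(f_mix, protos, i, j, lam):
    """Sketch of Eqs. (13)-(14): contrast f'_mix against the mixed prototype
    lam*p_i + (1-lam)*p_j and the |Y|-2 remaining prototypes."""
    p_mix = lam * protos[i] + (1.0 - lam) * protos[j]      # Eq. (13)
    mask = np.ones(len(protos), dtype=bool)
    mask[[i, j]] = False                                   # drop clusters i and j
    neg = np.exp(protos[mask] @ f_mix).sum()               # |Y|-2 negative prototypes
    return np.log(1.0 + neg / np.exp(f_mix @ p_mix))       # Eq. (14)
```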
3.2.3 Overall contrastive loss
The overall contrastive loss combines the above losses (8), (9), (10), (12) and (14):

L_contrast = λ_vi (L_vi + L'_vi + L''_vi) + λ_mix (L_proto + L_mix).   (15)
3.3 Joint Training
Our proposed framework incorporates a generative module and a contrastive module. The generative module disentangles a person image representation into identity and structure features, which allows for learning purified identity features for person ReID. The contrastive module learns invariance by contrasting augmented images. If we replaced the GAN-based augmentation with traditional data augmentation techniques, the two modules could be trained separately. However, separate training leads to sub-optimal performance for both of them. To address this issue, we couple the two modules with a shared identity encoder in a joint training framework. In this setting, both modules work collaboratively towards one objective: enhancing the discriminability of the identity representations. Inside GCL+, the generative module provides both id-unrelated and id-related augmentations for the contrastive module. In turn, the contrastive module maximizes the similarity between positive views while repulsing negative views, which refines the identity representations for a better generation quality. The two modules mutually promote each other's performance during joint training, leading to an optimal ReID performance. In our proposed framework, a forward propagation is first conducted on the generative module and subsequently on the contrastive module. A backward propagation is then conducted with an overall loss that combines Eq. (6) and Eq. (15):

L_overall = L_gan + L_contrast.   (16)
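The forward-then-backward schedule above can be sketched as a single training step; all module interfaces here are hypothetical placeholders, not the authors' code.

```python
def joint_training_step(batch, generative_module, contrastive_module, step_fn):
    """One joint iteration: generative forward, then contrastive forward,
    then one backward/update on L_overall = L_gan + L_contrast, Eq. (16).

    generative_module(batch)        -> (augmented views, GAN loss of Eq. (6))
    contrastive_module(batch, views) -> contrastive loss of Eq. (15)
    step_fn(loss)                    -> backward pass + optimizer update
    """
    views, l_gan = generative_module(batch)
    l_contrast = contrastive_module(batch, views)
    l_overall = l_gan + l_contrast      # Eq. (16)
    step_fn(l_overall)
    return l_overall
```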
4 EXPERIMENT
4.1 Datasets and Evaluation Protocols
We evaluate our proposed method GCL+ on five mainstream person ReID benchmarks, including three image-based datasets, Market-1501 [64], DukeMTMC-reID [65] and MSMT17 [14], and two video-based datasets, MARS [66] and DukeMTMC-VideoReID [67].
The Market-1501 dataset is collected in front of a supermarket at Tsinghua University from 6 cameras. It is composed of 12,936 images of 751 identities for training and 19,732 images of 750 identities for testing. DukeMTMC-reID is collected from 8 cameras installed on the campus of Duke University. It contains 16,522 images of 702 persons for training, and 2,228 query images and 17,661 gallery images of 702 persons for testing. MSMT17 is a large-scale ReID dataset, which includes 32,621 training images of 1,041 identities and 93,820 testing images of 3,060 identities, collected from 15 cameras deployed in both indoor and outdoor scenes. MARS is a large-scale video-based person ReID dataset. It contains 17,503 tracklets of 1,261 identities collected from 6 cameras, where 625 identities are used for training and the other 636 identities are used for testing. DukeMTMC-VideoReID is a video-based person ReID dataset derived from the DukeMTMC [65] dataset. It contains 2,196 training tracklets of 702 identities and 2,636 testing tracklets of another 702 identities.
As our method includes a GAN and a contrastive module, we report results for both unsupervised person ReID and generation quality evaluations. For the unsupervised person ReID evaluation, we provide results under both the unsupervised domain adaptation and the fully unsupervised settings. We report Cumulative Matching Characteristics (CMC) at Rank1, Rank5 and Rank10 accuracies, as well as mean Average Precision (mAP) on the testing set. For the generation quality evaluation, we conduct a qualitative comparison between our method and state-of-the-art methods on generated images.
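For reference, the two retrieval metrics reported above can be computed per query as sketched below. This simplified version takes a gallery already ranked by similarity and ignores the same-camera filtering used in standard ReID protocols.

```python
import numpy as np

def cmc_and_ap(ranked_gallery_labels, query_label):
    """CMC curve and Average Precision for one query.

    ranked_gallery_labels: gallery identity labels sorted by descending
                           similarity to the query.
    Returns (cmc, ap): cmc[k-1] is the Rank-k hit indicator; mAP is the
    mean of ap over all queries.
    """
    matches = (ranked_gallery_labels == query_label).astype(float)
    cmc = (np.cumsum(matches) > 0).astype(float)            # hit by rank k?
    precision = np.cumsum(matches) / (np.arange(len(matches)) + 1.0)
    ap = (precision * matches).sum() / max(matches.sum(), 1.0)
    return cmc, ap
```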
4.2 Implementation details
We introduce implementation details pertaining to the network design and general training configurations, as well as the three-stage optimization.
Network design. Our network design for the identity encoder E_id, the structure encoder E_str, the decoder G and the discriminator D is mainly inspired by [17], [25]. In the following descriptions, we denote the size of feature maps as channel×height×width. 1) E_id is an ImageNet [35] pre-trained ResNet50 [68] with slight modifications. The original fully connected layer is replaced by a batch normalization layer and a fully connected embedding layer, which outputs identity representations f of size 512×1×1 for the contrastive module. In parallel, we add a part average pooling that outputs identity features f_id of size 2048×4×1 for the generative module. 2) E_str is composed of four convolutional and four residual layers, which output structure features f_str of size 128×64×32. 3) G contains four residual and four convolutional layers. Every residual layer contains two adaptive instance normalization layers [18] that transform f_id into scale and bias parameters. 4) D is a multi-scale PatchGAN [19] discriminator at the 64×32, 128×64 and 256×128 scales.
General training configurations. Our framework is implemented in PyTorch [69] and trained on one Nvidia V100 GPU. The inputs are resized to 256×128. We empirically set a large weight λ_recon = 5 for the reconstruction term in Eq. (6). With a batch size of 16, we use SGD to train E_id and the Adam optimizer to train E_str, G and D. The learning rate is set to 1×10^−4 for Adam and 3.5×10^−4 for SGD, and both are multiplied by 0.1 after 10 epochs. The DBSCAN maximal neighborhood distance is set to 0.5 and the minimal sample number is set to 4. The number of negatives K is 8192. For testing, E_id outputs representations f of dimension 512. For video-based person ReID, due to our GPU memory constraint, we randomly sample 2 frames per tracklet on MARS and 8 frames per tracklet on DukeMTMC-VideoReID for training. For testing, all the frames from each tracklet are used to calculate a unified tracklet representation for similarity ranking. Other settings are kept the same as the image-based person ReID settings.
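The DBSCAN settings above (maximal neighborhood distance 0.5, minimal sample number 4) drive the pseudo-label assignment. As a rough, self-contained sketch of that clustering step, a minimal DBSCAN over plain Euclidean distances might look as follows; a real pipeline would typically rely on a library implementation over the learned features.

```python
import numpy as np
from collections import deque

def dbscan_pseudo_labels(feats, eps=0.5, min_samples=4):
    """Minimal DBSCAN: density-reachable expansion from core points.
    Returns one pseudo label per instance, -1 for noise/outliers."""
    n = len(feats)
    dist = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    core = [len(nb) >= min_samples for nb in neighbors]   # dense points
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        labels[i] = cluster
        queue = deque([i])
        while queue:                                      # expand the cluster
            j = queue.popleft()
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster
                    if core[k]:                           # only cores expand further
                        queue.append(k)
        cluster += 1
    return labels
```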
Three-stage optimization. To reduce the noise from imperfect generated images at early epochs, we train the four modules E_id, E_str, G and D with a three-stage optimization. Stage 1, E_id warm-up: we use a state-of-the-art unsupervised ReID method to warm up E_id, e.g., ACT [55], MMCL [59] or JVTC [60]. Stage 2, E_str, G and D warm-up: we freeze E_id and warm up E_str, G and D with only the overall GAN loss in Eq. (6) for 40 epochs. Stage 3, joint training: we bring in the memory bank and the pseudo labels to jointly train the whole framework with the overall loss in Eq. (16) for another 20 epochs.
[Fig. 5. Hyper-parameter analysis on α for the mixup coefficient on the Duke→Market and Market→Duke tasks (mAP and Rank1).]
[Fig. 6. Hyper-parameter analysis on β for the memory momentum and τ for the contrastive temperature on the Duke→Market task (mAP and Rank1).]
4.3 Unsupervised ReID Evaluation
To validate the effectiveness of each component, we conduct parameter analysis and ablation experiments with a JVTC [60] baseline. As JVTC+ is the enhanced version of JVTC with camera temporal distribution post-processing, the performance boost from the post-processing is almost fixed; the ablation experiments therefore show similar variance with the JVTC and JVTC+ baselines. We further compare our method with state-of-the-art unsupervised person ReID methods under three different baselines to show its generalizability.
Fig. 7. Hyper-parameter analysis on balancing coefficients λrecon for reconstruction weight, λvi for rotation contrast weight and λmix for mixup contrast weight on Duke→Market task. [Figure: Rank1 and mAP curves; λrecon varied over {3, 4, 5, 6, 7}, λvi and λmix over {0.6, 0.8, 1, 1.2, 1.4}.]
TABLE 5
Performance under different clustering neighborhood distance thresholds. 'N' is the approximate number of pseudo-identities.

            Duke→Market            Market→Duke
Threshold   N      mAP   Rank1     N      mAP   Rank1
0.4         ∼642   74.5  89.4      ∼840   60.9  77.1
0.45        ∼605   74.4  89.4      ∼810   61.2  77.4
0.5         ∼584   74.4  89.7      ∼786   61.3  78.0
0.55        ∼540   73.6  88.4      ∼744   61.1  76.8
0.6         ∼500   72.4  87.6      ∼697   60.7  77.7

4.3.1 Parameter analysis
Hyper-parameters, such as the mixing coefficient α, the memory momentum β and the view-invariant contrastive loss temperature τ, play important roles in our proposed GCL+ framework for unsupervised person ReID performance.
We vary their values to analyze the sensitivity of each hyper-parameter in GCL+. For the Beta distribution, a larger α results in a higher probability that λ is close to 0.5. ReID performance on both Duke→Market and Market→Duke tasks as a function of α is reported in Fig. 5. On both tasks, the optimal performance is achieved when α is around 0.6; as a consequence, α is set to 0.6 in our framework.
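The role of the mixing coefficient can be sketched in a few lines. This is an illustrative mixup-style sketch, not the paper's code; it assumes λ is drawn from Beta(α, α) and used as a convex combination weight.

```python
import random

def sample_mixing_coefficient(alpha: float) -> float:
    """Draw a mixup coefficient lambda ~ Beta(alpha, alpha).

    alpha > 1 concentrates lambda around 0.5 (balanced mixing);
    alpha < 1 pushes lambda toward 0 or 1 (one input dominates).
    """
    return random.betavariate(alpha, alpha)

def mix(x1, x2, lam):
    """Convex combination of two inputs (e.g. images or feature vectors)."""
    return [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]

lam = sample_mixing_coefficient(0.6)  # 0.6 is the value chosen in the paper
mixed = mix([1.0, 2.0], [3.0, 4.0], lam)
```

With `lam = 0.5` the two inputs contribute equally, which is the balanced case the α analysis is probing.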
The value of β controls the memory updating speed, while the value of τ scales the cosine similarity between contrastive views; a value that is too large or too small generally introduces more noise into contrastive learning. We report the performance variation with respect to β and τ on the Duke→Market task in Fig. 6. We find that performance is more sensitive to the similarity temperature τ. Based on the results, we set β to 0.2 and τ to 0.04.
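The two mechanisms can be sketched as follows. This assumes the common exponential-moving-average memory update m ← β·m + (1 − β)·f and temperature-scaled cosine similarities; the helper names are illustrative, not the paper's API.

```python
import math

def momentum_update(memory, feature, beta=0.2):
    """Update one memory entry: m <- beta * m + (1 - beta) * f.
    A smaller beta lets the memory track new features faster."""
    return [beta * m + (1.0 - beta) * f for m, f in zip(memory, feature)]

def scaled_softmax_sims(query, keys, tau=0.04):
    """Softmax over cosine similarities divided by temperature tau.
    A small tau sharpens the distribution over contrastive candidates."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    logits = [cos(query, k) / tau for k in keys]
    mx = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - mx) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

With τ = 0.04, a cosine gap of 0.1 between two candidates becomes a logit gap of 2.5, which is why the temperature is the more sensitive of the two knobs.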
The number of possible pseudo-identities N is related to clustering hyper-parameters, such as the maximal neighborhood distance threshold and the minimal cluster sample number. The distance threshold of DBSCAN is the maximal distance between two samples for one to be considered in the neighborhood of the other. A larger distance threshold enlarges the radius of a cluster, so more samples fall into the same cluster and N becomes smaller. As shown in Table 5, the threshold value only slightly affects ReID performance.
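The effect of the threshold on N can be illustrated with a toy clustering. This is plain single-linkage grouping, which is what DBSCAN degenerates to when min_samples is 1; it is not the paper's pipeline, only a sketch of why a larger eps shrinks N.

```python
def cluster_by_threshold(points, eps):
    """Union-find single-linkage clustering: two points end up in the
    same cluster whenever a chain of pairwise distances <= eps connects
    them. Returns the number of clusters (the analogue of N)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    for i in range(n):
        for j in range(i + 1, n):
            if dist(points[i], points[j]) <= eps:
                union(i, j)
    return len({find(i) for i in range(n)})

feats = [(0.0, 0.0), (0.3, 0.0), (1.0, 0.0), (1.3, 0.0)]
assert cluster_by_threshold(feats, 0.4) == 2  # two tight pairs
assert cluster_by_threshold(feats, 0.8) == 1  # everything merges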
+page_content=' As our framework jointly optimize the generative and contrastive modules, we set weight coefficients to balance different loss functions in the two modules.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' We vary the balancing coefficients λrecon, λvi and λmix in Equation (6) and (15).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' The corresponding results are reported in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Overall, the different values in the tested range only slightly influence the final results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Based on the results, we set λrecon = 5, λvi = 1 and λmix = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 Ablation study Contrastive learning methods strongly rely on data aug- mentation to create different augmented views for con- trasting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Our proposed GCL+ outperforms traditional con- trastive learning methods by replacing traditional data aug- mentation techniques with GAN-based augmentation tech- niques.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' To validate the effectiveness of our proposed GAN- based augmentation techniques and contrastive losses, we conduct ablation experiments on both Market-1501 and DukeMTMC-reID datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Data augmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Data augmentation techniques can be caterogized into id-unrelated and id-related augmen- tation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Id-unrelated augmentation creates intra-image vi- sual distortions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' In contrast, id-related augmentation cre- ates inter-image visual distortions, which affects image identities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' We compare results of traditional and genera- tive data augmentation under fully unsupervised setting and domain adaptation setting in Table 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' For traditional data augmentation, we use multiple popular person ReID 9 TABLE 6 Ablation study under fully unsupervised and UDA settings on traditional (w/o GAN) and generative (w/ GAN) data augmentation for the contrastive module.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' ‘Multi’ refers to multiple commonly used data augmentation techniques for person ReID, including random flipping, padding, cropping and erasing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' ‘Rotation’ refers to our proposed mesh-guided rotation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' ‘Mixup’ is conducted on image level, while ‘F-Mixup’ is conducted on feature level.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Fully unsupervised ID-unrelated ID-related Market Duke Multi Rotation Mixup F-Mixup D-Mixup mAP R1 R5 R10 mAP R1 R5 R10 w/o GAN Baseline 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 ✓ 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 ✓ ✓ 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 w/ GAN ✓ 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 ✓ ✓ 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 ✓ ✓ 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 ✓ ✓ 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 UDA ID-unrelated ID-related Duke→Market Market→Duke Multi Rotation Mixup F-Mixup D-Mixup mAP R1 R5 R10 mAP R1 R5 R10 w/o GAN Baseline 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 ✓ 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
[Table fragments: rows of ablation results (mAP/R1/R5/R10 values on Market-1501 and DukeMTMC-reID) whose header and caption lie outside this excerpt.]
TABLE 7
Ablation study on three view-invariant losses in Rotation Contrast and two prototype losses in Mixup Contrast.

Lvi  L′vi  L′′vi  Lproto  Lmix   Duke→Market     Market→Duke
                                 mAP   R1        mAP   R1
✓                                61.6  82.4      51.7  70.6
✓    ✓                           69.1  85.6      58.3  74.8
✓    ✓    ✓                      72.5  88.7      59.9  75.9
✓    ✓    ✓     ✓                72.8  88.8      60.6  76.9
✓    ✓    ✓     ✓      ✓         74.4  89.7      61.3  78.0

Fig. 8. Normalized Mutual Information (NMI) during 20 joint training epochs on Market-1501. ‘Trad’ refers to traditional data augmentation techniques. ‘Rot’ refers to id-unrelated mesh-guided rotation. ‘Full’ refers to combining id-unrelated mesh-guided rotation and id-related D-Mixup.
data augmentation techniques, including random flipping, padding, cropping and erasing [12], as id-unrelated augmentation, and Mixup [26] as id-related augmentation. Even with this traditional data augmentation, our contrastive module significantly outperforms the baseline. When we replace traditional data augmentation with generative data augmentation, the unsupervised person ReID performance improves further. Our proposed mesh-guided rotation (Rotation) works better than the multiple commonly used data augmentation techniques (Multi) for id-unrelated augmentation. Meanwhile, our proposed D-Mixup achieves better performance than image-level Mixup and feature-level Mixup (F-Mixup) for id-related augmentation.
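To make the two augmentation families concrete, the following is a minimal NumPy sketch of the generic operations named above (flip, pad-and-crop, random erasing [12], and image-level Mixup [26]); the function names and parameter defaults are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img, p=0.5):
    # Horizontally flip with probability p (id-unrelated augmentation).
    return img[:, ::-1].copy() if rng.random() < p else img

def pad_and_crop(img, pad=4):
    # Zero-pad the borders, then take a random crop of the original size.
    h, w = img.shape[:2]
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    top, left = rng.integers(0, 2 * pad + 1, size=2)
    return padded[top:top + h, left:left + w]

def random_erasing(img, max_frac=0.3):
    # Overwrite a random rectangle with noise, as in random erasing [12].
    h, w = img.shape[:2]
    eh = rng.integers(1, int(h * max_frac) + 1)
    ew = rng.integers(1, int(w * max_frac) + 1)
    top = rng.integers(0, h - eh + 1)
    left = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[top:top + eh, left:left + ew] = rng.random((eh, ew, img.shape[2]))
    return out

def mixup(img_a, img_b, label_a, label_b, alpha=1.0):
    # Image-level Mixup [26]: convex combination of two samples and their
    # labels (id-related augmentation), with lambda ~ Beta(alpha, alpha).
    lam = rng.beta(alpha, alpha)
    return lam * img_a + (1 - lam) * img_b, lam * label_a + (1 - lam) * label_b
```

All four operations preserve image shape, so they can be chained in any order inside a data-loading pipeline.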
Effects on pseudo labels. Robust identity representations should have better intra-class compactness and inter-class separability, which leads to better pseudo-label quality. We evaluate our pseudo-label quality by measuring the Normalized Mutual Information (NMI) [71] between our pseudo labels and the ground-truth labels. As illustrated in Fig. 8, traditional data augmentation (Trad) works well at the beginning, but ends up with worse quality. We argue that traditional data augmentation brings to the fore undesirable distortions of identity features, which easily leads to over-fitting on id-sensitive tasks. In contrast, GAN-based augmentation introduces more noise at the beginning, but avoids over-fitting in the final training epochs. In addition, our full GCL+ (Full) conducts both GAN-based id-unrelated and id-related augmentation, which achieves better pseudo-label quality than id-unrelated mesh-guided rotation alone (Rot).
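NMI, the metric used in Fig. 8, can be computed directly from the contingency table of pseudo-label clusters versus ground-truth identities. Below is a self-contained NumPy sketch using the arithmetic-mean normalization (the convention scikit-learn defaults to); it illustrates the metric itself, not the paper's evaluation code.

```python
import numpy as np

def normalized_mutual_info(pred, true):
    """NMI between two label assignments, normalized by the mean entropy."""
    pred, true = np.asarray(pred), np.asarray(true)
    n = len(pred)
    # Joint distribution over (pseudo-label cluster, ground-truth id) pairs.
    pu, pv = np.unique(pred), np.unique(true)
    cont = np.array([[np.sum((pred == u) & (true == v)) for v in pv] for u in pu])
    pij = cont / n
    pi = pij.sum(axis=1, keepdims=True)   # marginal over pseudo labels
    pj = pij.sum(axis=0, keepdims=True)   # marginal over true ids
    nz = pij > 0
    mi = np.sum(pij[nz] * np.log(pij[nz] / (pi @ pj)[nz]))
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    hu, hv = h(pi.ravel()), h(pj.ravel())
    return 2 * mi / (hu + hv) if hu + hv > 0 else 1.0
```

A perfect clustering scores 1 even if the cluster indices are permuted relative to the true ids, which is why NMI is a natural fit for pseudo-label quality.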
Contrastive loss. To learn maximal invariance from generated images and memory-stored images, we form three positive pairs for Rotation Contrast, namely (f, fpos), (f, f′new) and (fpos, f′new). By maximizing the similarity between these three positive pairs in Equations (8), (9) and (10), our objective is to build identity representations that are invariant to instance-level pose, viewpoint and background variance. Meanwhile, we use identity prototypes and mixed prototypes in Mixup Contrast to learn a smoother class-level decision boundary with Equations (12) and (14). To confirm the contribution of these contrastive losses, we gradually add each into our framework and report the corresponding results in Table 7. The results indicate that our proposed contrastive losses effectively contribute to learning robust representations for unsupervised person ReID.
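The three-positive-pair construction can be sketched as a sum of InfoNCE terms, one per pair. This is a schematic NumPy version: the temperature value, the negative set, and the plain sum over pairs are assumptions for illustration; the paper's Equations (8)-(10) may weight the terms and draw negatives from the memory differently.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.07):
    # InfoNCE: pull the anchor toward its positive, push it from negatives.
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p, negs = norm(anchor), norm(positive), norm(negatives)
    logits = np.concatenate([[a @ p], negs @ a]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

def rotation_contrast(f, f_pos, f_new, negatives, tau=0.07):
    # Sum one InfoNCE term per positive pair:
    # (f, f_pos), (f, f'_new), (f_pos, f'_new).
    pairs = [(f, f_pos), (f, f_new), (f_pos, f_new)]
    return sum(info_nce(a, p, negatives, tau) for a, p in pairs)
```

The loss drops as the three features of the same identity align, which is exactly the invariance to pose, viewpoint and background that the paragraph above describes.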
4.3.3 Comparison with state-of-the-art methods
Image-based person ReID. We compare our proposed GCL+ with state-of-the-art unsupervised ReID methods under three purely unsupervised and four unsupervised domain adaptation evaluation protocols. We evaluate the performance of GCL+ with different baselines, including MMCL [59], JVTC [60] and ACT [55], to demonstrate the generalizability of our proposed method. Under the fully unsupervised setting, we report associated results on the Market-1501, DukeMTMC-reID and MSMT17 datasets in Table 8. We first provide results of state-of-the-art methods, including BUC [57], SoftSim [58], TSSL [61], MMCL [59], JVTC [60], JVTC+ [60], MetaCam [70], as well as our previous work GCL [9], on the three datasets.
Our proposed method GCL+ significantly improves the unsupervised person ReID performance from

TABLE 8
Comparison of fully unsupervised ReID methods (%) on Market1501, DukeMTMC-reID and MSMT17 datasets. We test our proposed method on several baselines, see names in parentheses.

Method           Reference   Market1501             DukeMTMC-reID          MSMT17
                             mAP  R1   R5   R10     mAP  R1   R5   R10     mAP  R1   R5   R10
BUC [57]         AAAI'19     29.6 61.9 73.5 78.2    22.1 40.4 52.5 58.2    -    -    -    -
SoftSim [58]     CVPR'20     37.8 71.7 83.8 87.4    28.6 52.5 63.5 68.9    -    -    -    -
TSSL [61]        AAAI'20     43.3 71.2 -    -       38.5 62.2 -    -       -    -    -    -
MMCL [59]        CVPR'20     45.5 80.3 89.4 92.3    40.2 65.2 75.9 80.0    11.2 35.4 44.8 49.8
JVTC [60]        ECCV'20     41.8 72.9 84.2 88.7    42.2 67.6 78.0 81.6    15.1 39.0 50.9 56.8
JVTC+ [60]       ECCV'20     47.5 79.5 89.2 91.9    50.7 74.6 82.9 85.3    17.3 43.1 53.8 59.4
MetaCam [70]     CVPR'21     61.7 83.9 92.3 -       53.8 73.8 84.2 -       15.5 35.2 48.3 -
GCL(MMCL) [9]    CVPR'21     54.9 83.7 91.6 94.0    49.3 69.7 79.7 82.8    -    -    -    -
GCL(JVTC) [9]    CVPR'21     63.4 83.7 91.6 94.3    53.3 72.4 82.0 84.9    18.0 41.6 53.2 58.4
GCL(JVTC+) [9]   CVPR'21     66.8 87.3 93.5 95.5    62.8 82.9 87.1 88.5    21.3 45.7 58.6 64.5
GCL+(MMCL)       This paper  56.0 84.0 91.4 93.7    49.5 70.2 80.2 83.3    -    -    -    -
GCL+(JVTC)       This paper  66.3 85.3 92.9 94.6    54.6 74.2 82.8 85.6    19.2 44.7 56.4 61.4
GCL+(JVTC+)      This paper  69.3 89.0 94.6 96.0    63.5 83.1 87.4 88.8    22.0 47.9 61.3 67.1
+page_content='1 TABLE 9 Comparison of unsupervised domain adaptive ReID methods (%) between Market1501, DukeMTMC-reID and MSMT17 datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' We test our proposed method on several baselines, see names in parentheses.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Method Reference Duke→Market Market→Duke Market→MSMT17 Duke→MSMT17 mAP R1 R5 R10 mAP R1 R5 R10 mAP R1 R5 R10 mAP R1 R5 R10 ECN [7] CVPR’19 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 PDA [21] ICCV’19 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 CR-GAN [41] ICCV’19 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 SSG [54] ICCV’19 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 32.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 MMCL [59] CVPR’20 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 ACT [55] AAAI’20 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 DG-Net++ [17] ECCV’20 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 JVTC [60] ECCV’20 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 ECN+ [56] TPAMI’20 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 JVTC+ [60] ECCV’20 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 MMT [8] ICLR’20 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 CAIL [50] ECCV’20 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 MetaCam [70] CVPR’21 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 GCL(ACT) [9] CVPR’21 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 GCL(JVTC) [9] CVPR’21 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 GCL(JVTC+) [9] CVPR’21 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 29.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 GCL+(ACT) This paper 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 GCL+(JVTC) This paper 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='0 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='7 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 GCL+(JVTC+) This paper 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='6 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='2 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='8 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='5 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='9 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='1 the three baselines MMCL, JVTC and JVTC+.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
The proposed new D-Mixup and Mixup Contrast in our framework GCL+ consistently surpass the performance of our previous work GCL with the three different baselines. With the strong baseline JVTC+, our method achieves state-of-the-art performance on the three datasets.
Under the unsupervised domain adaptation setting, we report results on four mainstream benchmarks, including Duke→Market, Market→Duke, Market→MSMT17 and Duke→MSMT17, in Table 9. Our proposed method GCL+ also achieves better performance than state-of-the-art methods, including ECN [7], PDA [21], CR-GAN [41], SSG [54], MMCL [59], ACT [55], DG-Net++ [17], JVTC [60], ECN+ [56], JVTC+ [60], MMT [8], CAIL [50], MetaCam [70], as well as our previous work GCL [9]. Among these methods, PDA, CR-GAN and DG-Net++ share certain similarities with our proposed method GCL+, in that they are also GAN-based. However, PDA and DG-Net++ use either 2D skeletons or randomly gray-scaled images as guidance, which cannot preserve body shape information. Further, PDA, CR-GAN and DG-Net++ do not manipulate identity features to generate in-between identity images. CAIL [50] considers cross-domain Mixup, where interpolated structures may introduce additional noise into identity features. Our proposed D-Mixup does not suffer from such interpolated structures. In addition, cross-domain Mixup interpolates images from two domains, while our proposed D-Mixup interpolates intra-domain images, which is more flexible for fully unsupervised ReID.
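The intra-domain interpolation idea behind D-Mixup can be sketched as a convex combination of two identity feature vectors from the same (unlabeled) domain. This is a minimal illustration under the assumption that identity features are plain vectors; the names `d_mixup`, `f_a` and `f_b` are our own, and the paper's actual D-Mixup operates inside the GAN's identity-feature space.

```python
import numpy as np

def d_mixup(id_feat_a, id_feat_b, lam=0.5):
    """Linearly interpolate two identity features to obtain an
    'in-between identity' representation (hypothetical sketch)."""
    assert 0.0 <= lam <= 1.0
    return lam * id_feat_a + (1.0 - lam) * id_feat_b

rng = np.random.default_rng(0)
f_a = rng.standard_normal(2048)  # identity feature of image A (target domain)
f_b = rng.standard_normal(2048)  # identity feature of image B (same domain)
f_mix = d_mixup(f_a, f_b, lam=0.5)
```

Because both features come from the same domain, the interpolated vector stays on the target-domain identity manifold, which is the flexibility argument made above for fully unsupervised ReID.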
Video-based person ReID. We compare our proposed GCL+ with state-of-the-art unsupervised video person ReID methods on the MARS and DukeMTMC-VideoReID datasets. RACE [72] and EUG [67] leverage one labeled video tracklet per identity to initialize their models; such one-example video-based ReID methods cannot truly be considered unsupervised. DAL [73], TAUDL [74] and UTAL [75] utilize the camera label of each tracklet and try to associate tracklets of the same person across different cameras. OIM [76], BUC [57] and TSSL [61] are fully unsupervised video person ReID methods. We use the fully unsupervised method BUC as our baseline. As shown in Table 10, our proposed methods GCL (view-point augmentation) and GCL+ (view-point and in-between identity augmentation) significantly outperform previous unsupervised video-based person ReID methods.
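For reference, the mAP and Rank-k (CMC) metrics reported in these tables can be computed per query as sketched below on toy data. This is a generic sketch of the standard ReID protocol, not code from the paper; a real evaluation also filters out same-camera gallery entries before ranking.

```python
import numpy as np

def rank_k_hit(ranked_labels, query_label, k):
    """1 if a correct gallery match appears in the top-k, else 0 (CMC Rank-k)."""
    return int(query_label in ranked_labels[:k])

def average_precision(ranked_labels, query_label):
    """Average precision of one query over a distance-ranked gallery."""
    hits, precisions = 0, []
    for i, lab in enumerate(ranked_labels, start=1):
        if lab == query_label:
            hits += 1
            precisions.append(hits / i)
    return float(np.mean(precisions)) if precisions else 0.0

# Toy gallery ranking for one query of identity 7; matches at ranks 2, 3, 5.
ranking = [3, 7, 7, 1, 7]
ap = average_precision(ranking, 7)   # (1/2 + 2/3 + 3/5) / 3
r1 = rank_k_hit(ranking, 7, 1)       # no match at rank 1
r5 = rank_k_hit(ranking, 7, 5)       # a match within the top 5
```

mAP and the table's Rank-k columns are then the mean of `ap` and of `rank_k_hit` over all queries.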
TABLE 10
Comparison with state-of-the-art methods on two video-based ReID datasets, MARS and DukeMTMC-VideoReID. The “Labels” column indicates the labels used in each method. “OneEx” denotes one-example annotation per identity. “Camera” refers to camera annotation. “Baseline (BUC)” refers to our reproduced results.
Method               Labels          MARS                  DukeMTMC-VideoReID
                              mAP   R1    R5    R10      mAP   R1    R5    R10
RACE [72]            OneEx    24.5  43.2  57.1  62.1      -     -     -     -
EUG [67]             OneEx    42.4  62.6  74.9   -       63.2  72.7  84.1   -
DAL [73]             Camera   23.0  49.3  65.9  72.2      -     -     -     -
TAUDL [74]           Camera   29.1  43.8  59.9  72.8      -     -     -     -
UTAL [75]            Camera   35.2  49.9  66.4  77.8      -     -     -     -
OIM [76]             None     13.5  33.7  48.1  54.8     43.8  51.1  70.5  76.2
BUC [57]             None     29.4  55.1  68.3  72.8     66.7  74.8  86.8  89.7
TSSL [61]            None     30.5  56.3  64.6  73.9      -     -     -     -
Baseline (BUC [57])  None     32.0  51.1  66.5  71.6     67.1  72.9  86.2  90.0
GCL                  None     48.6  64.8  77.5  82.0     75.9  80.1  90.5  93.7
GCL+                 None     50.1  66.5  78.7  82.2     76.3  80.9  91.5  94.2

4.4 Generation Quality Evaluation

4.4.1 Ablation study
We conduct a qualitative ablation study, presented in Fig. 9, to demonstrate that our proposed contrastive module can improve generative quality for person image generation.
Unconditional GANs learn a data distribution via reconstruction and adversarial training on each image, and then generate new images that fit the learned distribution. However, unconditional GANs generate from the features of a single image and neglect the features shared by different images of one person (or class). Conditional GANs generally use human-annotated identity labels to learn shared class-level features, which are more view-invariant. Our proposed GCL+ introduces an unsupervised way to learn view-invariant class-level features for person image generation by contrasting pseudo positive views.
We illustrate two examples, respectively from the Market-1501 and DukeMTMC-reID datasets, in Fig. 9 to validate the effectiveness of our proposed contrastive module for person image generation. Given a target person, a robust identity representation should contain salient features shared by the majority of observations across different view-points and poses. When GCL+ is trained without Lcontrast, our generative module tends to focus only on salient features of the original image (the black backpack in the first example and the blue jacket in the second), while neglecting salient features of other images of the same person (the yellow t-shirt in the first example and the red backpack in the second). The contrastive module ensures the consistency of identity features for generation in different poses and view-points.
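The view-contrast idea can be illustrated with a standard InfoNCE-style loss between an image feature and its generated pseudo-positive view. This is a generic sketch with our own names (`info_nce`, `anchor`, `positive`); the paper's exact Lcontrast in Eq. (15) may differ in how positives and negatives are constructed.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: pull the generated pseudo-positive view toward
    the anchor feature, push other samples away (generic sketch)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # Positive similarity sits at index 0 of the logits.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

rng = np.random.default_rng(1)
anchor = rng.standard_normal(128)
positive = anchor + 0.05 * rng.standard_normal(128)  # generated view, near anchor
negatives = [rng.standard_normal(128) for _ in range(8)]
loss = info_nce(anchor, positive, negatives)
```

Minimizing this loss drives the identity features of an image and of its generated views together, which is the consistency property discussed above.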
4.4.2 Comparison with state-of-the-art methods
We conduct a qualitative comparison between our proposed method GCL+ and state-of-the-art GAN-based person ReID methods, including FD-GAN [20], IS-GAN [42], DG-Net [25] and DG-Net++ [17]. We re-implement these GAN-based person ReID methods from their published source code and generate six images per real image of the Market-1501 dataset, as shown in Fig. 10. FD-GAN, IS-GAN and DG-Net are supervised methods, which rely on human-annotated labels to learn robust identity-level features. We observe that images generated by FD-GAN and IS-GAN suffer from evident visual blur, which may lose detailed identity information after generation.
Fig. 9. Qualitative ablation study on the effectiveness of the contrastive loss in Eq. (15) for generation quality. Lcontrast allows preserving salient features from other views (the yellow t-shirt in the first example and the red backpack in the second example) in identity representations for generation in different poses and view-points.

TABLE 11
Examples of 3D mesh guided generation on the DukeMTMC-reID dataset (view-points from 0° to 315° in 45° steps).

Fig. 10. Comparison of generated images on the Market-1501 dataset. Examples of FD-GAN, IS-GAN, DG-Net, DG-Net++ and GCL+ are generated from the same real images shown in the figure. We note that DG-Net++ and GCL+ are unsupervised methods.

TABLE 12
Examples of 3D mesh guided generation on the MSMT17 dataset (view-points from 0° to 315° in 45° steps).

Compared to FD-GAN and IS-GAN, DG-Net can generate sharper images.
However, using randomly switched gray-scaled images as guidance is prone to producing incoherent body shapes and carried objects. More comparisons of the generative quality of FD-GAN, IS-GAN, DG-Net and our method are provided in Supplementary Materials Section B. As a UDA method, DG-Net++ uses cross-domain gray-scaled images as guidance, which, however, shares the same generation problems as DG-Net. Different from DG-Net++, our proposed GCL+ is a fully unsupervised ReID method, which directly augments data diversity in the target domain without the need for a labeled source domain. Moreover, an image in GCL+ is generated from its own rotated mesh, which helps preserve body shape information and does not add extra carried structures. The generated images from GCL+ have higher quality and greater similarity to real images than those of other methods. To validate the generative quality on the DukeMTMC-reID and MSMT17 datasets, we provide more examples in Table 11 and Table 12. Consistency in the id-related space and variance in the id-unrelated space validate the purity (disentanglement quality) of the identity representations in our framework GCL+.
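The view-point guidance shown in Tables 11 and 12 rotates each person's own mesh about the vertical axis in 45° steps. Geometrically this amounts to applying a yaw rotation matrix to the mesh vertices, as sketched below; the function name and toy vertices are our own, and the actual pipeline of course renders the rotated mesh rather than just transforming points.

```python
import numpy as np

def rotate_mesh_yaw(vertices, angle_deg):
    """Rotate mesh vertices (N, 3) about the vertical (y) axis by angle_deg,
    to synthesize a novel view-point from a person's own 3D mesh."""
    t = np.deg2rad(angle_deg)
    R = np.array([[ np.cos(t), 0.0, np.sin(t)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return vertices @ R.T

# Two toy vertices; generate the eight guidance views 0°, 45°, ..., 315°.
verts = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
views = {a: rotate_mesh_yaw(verts, a) for a in range(0, 360, 45)}
```

Because the same mesh is reused for every angle, body shape is preserved across all generated view-points, which is the property argued for above.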
We further provide tracklet examples before and after our view-point rotation for video-based person ReID in Fig. 11. The results show that our method also works well for video-based person ReID.
+page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='3 Failure case analysis We show some failure cases from the rotation generative model in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Actually, when there exists inconsistent front-side and back-side patterns, the rotation-based genera- tion can hardly generate accurate images after large rotation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
Fig. 11. Examples of tracklet frames before and after our view-point rotation. Tracklets are respectively sampled from the MARS and DukeMTMC-VideoReID datasets.
For example, the model may treat visual patterns present only on the back side (the backpack in the first row) or only on the front side (the carried objects in the second row) as whole-body appearance features for generation. One possible solution is to use a 3D human-object arrangement mesh generator [77] to help the generative model distinguish humans from objects.

5 CONCLUSION
In this paper, we propose an enhanced joint generative and contrastive learning (GCL+) framework for unsupervised person ReID. The framework is composed of a generative module for data augmentation and a contrastive module that learns invariance from the generated variance. For the generative module, we propose a 3D mesh guided GAN that realizes id-unrelated and id-related augmentation by, respectively, rotating 3D meshes as generation guidance and interpolating two identity representations.
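The id-related augmentation by interpolating two identity representations can be sketched as a mixup-style convex combination in feature space. This is an illustrative sketch only: the function name `mixup_identity` and the Beta-distribution parameter `alpha` are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def mixup_identity(feat_a, feat_b, alpha=0.75):
    """Interpolate two identity representations (id-related augmentation).

    Draws a mixing coefficient lam ~ Beta(alpha, alpha) and returns the
    convex combination lam * feat_a + (1 - lam) * feat_b together with lam.
    """
    lam = float(np.random.beta(alpha, alpha))
    mixed = lam * feat_a + (1.0 - lam) * feat_b
    return mixed, lam

# Example: mixing two 4-dimensional identity features
mixed, lam = mixup_identity(np.zeros(4), np.ones(4))
```

Because the result is a convex combination, the interpolated feature stays on the segment between the two identity representations, which is what lets the generator synthesize plausible "intermediate" identities.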
Fig. 12. Failure cases of rotation-based generation. First row: the backpack can be generated onto the front side. Second row: the carried object can be generated onto the back side.

For the contrastive module, we design Rotation Contrast and Mixup Contrast, respectively for the two data augmentation techniques, to learn robust identity representations. Extensive experiments validate the superiority of the proposed GAN-based augmentation over traditional augmentation techniques for contrastive representation learning. The generative module benefits from the learned robust identity representations, which preserve fine-grained identity information for better generation quality. GCL+ outperforms state-of-the-art methods under both fully unsupervised and unsupervised domain adaptation settings. Moreover, our contrastive module can be regarded as a contrastive discriminator in a GAN, which provides a new unsupervised approach for identity-preserving person image generation.
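The contrastive objectives above pull an anchor feature toward its augmented positives and push it away from other identities. As a hedged sketch of this kind of InfoNCE-style objective (the function `info_nce_loss` and the `temperature` value are illustrative assumptions, not the exact losses used by Rotation Contrast or Mixup Contrast):

```python
import numpy as np

def info_nce_loss(anchor, positives, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss on L2-normalized features.

    anchor: (d,) feature; positives: (p, d) augmented views of the same
    identity; negatives: (n, d) features of other identities.
    Returns the mean over positives of -log softmax(anchor . positive).
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    a = normalize(anchor)
    pos = normalize(positives) @ a / temperature  # (p,) similarity logits
    neg = normalize(negatives) @ a / temperature  # (n,) similarity logits
    logits = np.concatenate([pos, neg])
    log_denom = np.log(np.exp(logits).sum())
    return float(np.mean(log_denom - pos))

# Anchor identical to its positive, orthogonal/opposite to the negatives
loss = info_nce_loss(np.array([1.0, 0.0]),
                     np.array([[1.0, 0.0]]),
                     np.array([[0.0, 1.0], [-1.0, 0.0]]))
```

With a positive that matches the anchor and well-separated negatives, the loss is close to zero; it grows as negatives become similar to the anchor, which is the gradient signal that shapes the identity space.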
ACKNOWLEDGMENTS
This work has been supported by the French government, through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002. The authors are grateful to the OPAL infrastructure from Université Côte d'Azur for providing resources and support.
+page_content=' [29] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Hadsell, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Chopra, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' LeCun, “Dimensionality reduction by learning an invariant mapping,” in CVPR, 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [30] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Wu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Xiong, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Yu, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Lin, “Unsupervised feature learning via non-parametric instance discrimination,” in CVPR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [31] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Caron, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Misra, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Mairal, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Goyal, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Bojanowski, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Joulin, “Unsupervised learning of visual features by contrasting cluster assignments,” in NeurIPS, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [32] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='-B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Grill, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Strub, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Altch´e, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Tallec, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Richemond, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Buchatskaya, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Doersch, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Pires, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Guo, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Azar et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=', “Bootstrap your own latent: A new approach to self- supervised learning,” in NeurIPS, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [33] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Chen and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' He, “Exploring simple siamese representation learning,” in CVPR, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [34] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Chen, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Fan, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Girshick, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' He, “Improved baselines with momentum contrastive learning,” arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='04297, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [35] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Russakovsky, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Deng, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Su, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Krause, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Satheesh, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Ma, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Huang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Karpathy, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Khosla, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Bernstein, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Berg, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Fei-Fei, “Imagenet large scale visual recognition challenge,” IJCV, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [36] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zheng, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zheng, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Yang, “Unlabeled samples generated by gan improve the person re-identification baseline in vitro,” in ICCV, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [37] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Radford, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Metz, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Chintala, “Unsupervised represen- tation learning with deep convolutional generative adversarial networks,” in ICLR, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [38] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Qian, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Fu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Xiang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Qiu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Wu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='-G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Jiang, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Xue, “Pose-normalized image generation for person re- identification,” in ECCV, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' 14 [39] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zhu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Park, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Isola, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Efros, “Unpaired image-to- image translation using cycle-consistent adversarial networks,” in CVPR, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [40] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Huang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Wu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Xu, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zhong, “Sbsgan: Suppression of inter-domain background shift for person re-identification,” in ICCV, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [41] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Chen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zhu, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Gong, “Instance-guided context rendering for cross-domain person re-identification,” in ICCV, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [42] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Eom and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Ham, “Learning disentangled representation for robust person re-identification,” in NeurIPS, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [43] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Tokozume, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Ushiku, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Harada, “Between-class learning for image classification,” in CVPR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [44] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Yun, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Han, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Oh, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Chun, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Choe, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Yoo, “Cutmix: Regularization strategy to train strong classifiers with localizable features,” in ICCV, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [45] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Berthelot, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Carlini, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Goodfellow, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Papernot, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Oliver, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Raffel, “Mixmatch: A holistic approach to semi-supervised learning,” in NeurIPS, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [46] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Berthelot, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Carlini, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Cubuk, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Kurakin, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Sohn, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zhang, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Raffel, “Remixmatch: Semi-supervised learn- ing with distribution matching and augmentation anchoring,” in ICLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [47] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Xu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zhang, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Ni, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Li, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Wang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Tian, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zhang, “Adversarial domain adaptation with domain mixup,” in AAAI, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [48] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zhong, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zhu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Luo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Li, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Yang, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Sebe, “Openmix: Reviving known knowledge for discovering novel visual cate- gories in an open world,” in CVPR, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [49] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Hendrycks, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Mu, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Cubuk, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zoph, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Gilmer, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Lak- shminarayanan, “Augmix: A simple data processing method to improve robustness and uncertainty,” in ICLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [50] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Luo, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Song, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zhang, “Generalizing person re- identification by camera-aware invariance learning and cross- domain mixup,” in ECCV, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [51] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Wang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zhu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Gong, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Li, “Transferable joint attribute- identity deep learning for unsupervised person re-identification,” CVPR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [52] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Lin, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Li, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='-T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Li, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Kot, “Multi-task mid-level feature alignment network for unsupervised cross-dataset person re-identification,” in BMVC, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [53] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content='-X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Yu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zheng, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Wu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Guo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Gong, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Lai, “Unsu- pervised person re-identification by soft multilabel learning,” in CVPR, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [54] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Fu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Wei, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Wang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Zhou, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Shi, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' Huang, “Self-similarity grouping: A simple unsupervised cross domain adaptation approach for person re-identification,” in ICCV, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
+page_content=' [55] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'}
diff --git a/zdAzT4oBgHgl3EQfC_pK/content/tmp_files/2301.00968v1.pdf.txt b/zdAzT4oBgHgl3EQfC_pK/content/tmp_files/2301.00968v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..55bd4e169935e97241cc6dda5aaee39935916d6b
--- /dev/null
+++ b/zdAzT4oBgHgl3EQfC_pK/content/tmp_files/2301.00968v1.pdf.txt
@@ -0,0 +1,451 @@
+Interpretation and Analysis of the Steady-State Neural Response to
+Complex Sequential Structures: a Methodological Note
+
+Nai Ding
+College of Biomedical Engineering and Instrument Science,
+Zhejiang University, Hangzhou, China
+
+
+Abstract
+Frequency tagging is a powerful approach to investigate the neural processing of
+sensory features, and is recently adapted to study the neural correlates of
+superordinate structures, i.e., chunks, in complex sequences such as speech and
+music. The nesting of sequence structures, the necessity to control the periodicity in
+sensory features, and the low-frequency nature of sequence structures pose new
+challenges for data analysis and interpretation. Here, I discuss how to interpret the
+frequency of a sequential structure, and factors that need to be considered when
+analyzing the periodicity in a signal. Finally, a safe procedure is recommended for the
+analysis of frequency-tagged responses.
+
+
+
+
+1. Introduction
+Frequency tagging is a powerful technique to extract the neural response tracking a
+stimulus feature. In general, in the frequency tagging paradigm, a target stimulus
+feature is periodically modulated at a frequency f. Consequently, the neural response
+that dynamically tracks the stimulus feature also fluctuates at frequency f. The f-Hz
+frequency tagged response is often extracted using the Discrete Fourier Transform
+(DFT) or wavelet transform. Frequency-tagging is a powerful paradigm for
+electroencephalography (EEG) and magnetoencephalography (MEG) studies since it
+can extract any neural response that follows the f-Hz change in the stimulus,
+regardless of the latency or waveform of the response. The paradigm has been widely
+applied to study visual (Norcia et al., 2015; Regan, 1977) and auditory (Galambos et
+al., 1981; Picton et al., 2003) processing: The frequency-tagged response to periodic
+changes in visual features, e.g., luminance, is referred to as the Steady State Visual
+Evoked Potentials (SSVEP), while the frequency-tagged response to periodic changes
+in auditory features, e.g., intensity, is referred to as the auditory Steady State
+Response (aSSR). These responses are widely applied to study the basic properties of
+sensory encoding (Herrmann, 2001; Ross et al., 2000; Wang et al., 2012; Wong et al.,
+2007) and cognitive control (Andersen et al., 2008; Elhilali et al., 2009; Gao et al.,
+2021).
+
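As a concrete illustration of the extraction step described above, the sketch below simulates a response component at a hypothetical tagging frequency of 4 Hz buried in noise, and recovers its amplitude from the corresponding DFT bin. All parameters (sampling rate, frequency, amplitude, phase) are illustrative, not taken from any particular study:

```python
import numpy as np

# Illustrative parameters (not from any particular study).
fs = 100.0     # sampling rate, Hz
f_tag = 4.0    # tagging frequency f, Hz
dur = 10.0     # trial duration, s (an integer number of f_tag cycles)
t = np.arange(0, dur, 1.0 / fs)

# Simulated response: a component tracking the tagged feature (with an
# arbitrary phase, i.e., response latency) plus background noise.
rng = np.random.default_rng(0)
response = 2.0 * np.sin(2 * np.pi * f_tag * t + 0.3) + rng.normal(0.0, 1.0, t.size)

# DFT; with an integer number of cycles, f_tag falls exactly on a bin.
spectrum = np.fft.rfft(response) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
k = int(np.argmin(np.abs(freqs - f_tag)))

amp = 2.0 * np.abs(spectrum[k])  # amplitude of the f-Hz tagged response
```

Because the amplitude is read from the magnitude of a single DFT bin, the estimate is insensitive to the phase (i.e., latency) and exact waveform of the response, which is the property that makes the paradigm attractive for EEG/MEG.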
+More recently, the frequency-tagging paradigm has been applied to study the neural
+processing of superordinate structures in complex sequences, e.g., speech and music:
+The hypothesis in these studies is that a mentally constructed superordinate sequence
+structure, i.e., a sentence, is neurally represented by a response whose duration
+matches the duration of the structure in the stimulus (Buiatti et al., 2009; Ding et al.,
+2016; Nozaradan et al., 2011). On the one hand, frequency tagging provides a
+powerful paradigm to investigate the neural processing of a chunk in contrast to a
+brief stimulus event and has stimulated a large number of studies (Batterink & Paller,
+2019; Benitez-Burraco & Murphy, 2019; Choi et al., 2020; Glushko et al., 2022;
+Henin et al., 2021; Kaufeld et al., 2020; Kazanina & Tavano, 2022; Keitel et al., 2018;
+Lo et al., 2022; Lu et al., 2021; Makov et al., 2017; Meng et al., 2021; Meyer, 2018).
+On the other hand, the complexity of the sequence processing problem has also
+caused more challenges to the analysis and interpretation of the frequency-tagged
+responses. First, in traditional frequency tagging studies, each stimulus feature of
+interest is tagged at a distinct frequency, while the structures in a complex sequence
+are often nested so that different levels of structures cannot be tagged at unrelated
+arbitrary frequencies. For example, in a sentence “the cute boy smiled”, the first three
+words construct a noun phrase based on syntax. Nevertheless, the 3-word noun phrase
+and the 4-word sentence are nested so that they cannot be frequency tagged at
unrelated frequencies. The nesting between structures leads to a dissociation between
+structure duration and structure repetition period, which is discussed in Section 2.1.
+
+Second, traditional frequency tagging studies explicitly create periodic changes in a
+stimulus feature while the studies on sequence structures sometimes want to avoid
+such periodic changes in basic stimulus features to isolate the neural response
+generated by internal mental processes. What is a neural response generated by
+internal mental processes? For example, a metrical structure may be imagined when
+listening to an isochronous beat sequence, and the neural response at the imagined
+meter rate can reflect internally driven processes (Nozaradan et al., 2011). Similarly,
+when a sequence of words is grouped into sentences based on syntactic rules, the
+neural response at the sentence rate can reflect higher-level sentence processing (Ding
+et al., 2016). In these situations, however, if a basic sensory feature has the same
+
+periodicity as the imagined meter or syntactically constructed sentence, it is
+ambiguous whether the neural response tracks the sensory feature or the sequence
+structures. Therefore, it is often necessary to check the periodicity in stimulus
features. Caution, however, is needed since some types of periodicity are not
captured by the Fourier transform, as discussed in Section 2.2.
+
Third, the analysis of responses to frequency-tagged sequence structures is sometimes
prone to artifacts that seldom affect the analysis of traditional frequency-tagged
responses. Sequence structures often correspond to a very low frequency, e.g., < 3
Hz, and such a low-frequency response may be contaminated by artifacts introduced
by overlapping analysis epochs (Benjamin et al., 2021). Section 3 illustrates why such artifacts may be
+generated and discusses potential guidelines for appropriate analysis of the frequency
+tagged responses, including the selection of analysis duration and whether a
+smoothing window should be used. This article discusses common technical issues,
+instead of the analysis of a specific experiment. However, to facilitate interpretation, a
+hypothetical experiment is provided in Fig. 1A, but the conclusions are not limited to
+this example. On the other hand, the target audience is experimentalists instead of
engineers. Therefore, this article attempts to explain ideas using illustrations and skips
+mathematical derivations. The mathematical basis of the DFT can be found in classic
+textbooks such as Oppenheim et al. (2001).
+
+2. What is not reflected by the Fourier transform
+2.1 Frequency may not reflect the time constant or signal duration
+For frequency-domain analysis, a central concept is frequency, which corresponds to
+the period of a signal. The period of a signal, however, does not necessarily coincide
with other time constants of a signal. For example, an exponential signal e^(−t/τ) has a
+
+time constant τ, but the signal is aperiodic and τ is not a period of the signal. Even for
+a periodic signal, its period may dissociate from the time constant or duration of the
+waveform within a period, and some examples are shown in Fig. 1B. In these
+examples, the signals have a period of 1 s, and the waveform within a period is shown
+in the left panel. The temporal parameters of the signal, including the time constant of
+an exponential function, duration of a sawtooth signal, and frequency of a single-cycle
+sinusoid, affect the shape of the Fourier spectrum but generally do not lead to any
+spectral peak corresponding to these parameters. Instead, since the signal repeats at a
+rate of 1 Hz, the spectrum shows peaks at 1 Hz and its harmonically related
+frequencies, i.e., 2 Hz, 3 Hz, etc. When the period of the signal changes, however, the
+spectral peaks shift accordingly, even if the waveform within a cycle remains
unchanged (Fig. 1C). See Zhou et al. (2016) for more illustrations of
+how the spectrum is influenced by the signal repetition rate and the waveform within
+a period.
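These observations are easy to reproduce numerically. The sketch below (assuming Python with NumPy; the sampling rate and time constant are illustrative choices, not values taken from the figure) builds a 1-Hz periodic signal from an exponential waveform and confirms that the spectrum is nonzero only at the repetition rate and its harmonics, with no peak at 1/τ:

```python
import numpy as np

fs = 100                       # sampling rate in Hz (illustrative choice)
period = 1.0                   # repetition period in s, as in Fig. 1B
tau = 0.25                     # time constant of the exponential waveform

# One period of an exponential waveform, repeated 10 times
t = np.arange(int(fs * period)) / fs
signal = np.tile(np.exp(-t / tau), 10)

spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The spectrum is nonzero only at 0, 1, 2, 3, ... Hz (the repetition
# rate and its harmonics); there is no peak at 1/tau = 4 Hz specifically
nonzero = freqs[spectrum > 1e-6]
print(nonzero[:5])
```

Changing tau reshapes the relative heights of the harmonic peaks but never moves them away from integer multiples of 1 Hz.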
+
+
+
+
Figure 1. Peaks in the spectrum reflect the periodicity of a signal. A) A hypothetical
+experiment condition, in which a noun phrase (NP) is embedded in a sentence (S).
+The duration of the NP is either 0.75 s or 0.5 s, and a neural response is hypothesized
to be modulated by the duration of the NP. B) Signals that repeat every 1 s and the
+corresponding spectra. The left panel shows the waveform within a period, and the
+black and blue curves have different time constants, i.e., 0.75 s and 0.5 s respectively.
The right panel shows the spectrum, i.e., the magnitude of the DFT of 10
+periods of the corresponding signal. The time constant and the corresponding
+frequency are shown by the vertical dotted lines. The spectrum has peaks at 1 Hz, 1
+over the signal period, and harmonically related frequencies, regardless of the time
+constant of the signal within a period. C) Signals that are constructed by the same
+sawtooth waveform but have different repetition rates. The spectral peaks always
+reflect the repetition rate.
+
2.2 Frequency may not reflect the rate of change
+Suppose a signal changes every T s. Intuitively, its Fourier spectrum should peak at
+1/T Hz. This intuition, however, is not always true and an example is given in Fig. 2,
+in which the spectrum shows troughs at 1/T Hz and harmonically related frequencies.
+When the signal is employed to modulate the gain of a 4-Hz sinusoid, the modulated
+sinusoid does not show any power at 1/T Hz either. The purpose of these examples is
+to show that the Fourier transform may be blind to some rhythms. Why does the
+signal lack power at 1/T Hz? In the Fourier transform, the power at f is determined by
+the dot product between the signal and sinusoids at frequency f (including both sine
and cosine). The signals in Fig. 2 contain no fluctuations within each T s and
therefore have no correlation with sinusoids at 1/T Hz. Figure 3 illustrates the dot
+product between signals.
+
+
+
+Figure 2. The change rate of a signal can correspond to troughs in the spectrum. The
+upper panel shows a signal that changes once every 1 s, and the lower panel is a 4-Hz
+sinusoid that is amplitude modulated by the signal on the upper panel. In the
+spectrum, troughs are observed at 1 Hz and harmonically related frequencies.
+
+
+
+
+
+Figure 3. Illustration of the dot product between signals, which is the basis of the
DFT. A) A 3-Hz sinusoid, which is employed to calculate the DFT coefficient at 3
+Hz. BC) Signals to analyze and their point-by-point product with the reference signal.
+The top signal is similar to the signal in Fig. 2, while the other 3 signals are sinusoids
+with the frequency shown by the number in panel B. The sum of the product signal,
+i.e., the dot product between the two signals, is shown by the number in red in panel
+C. DE) Examples of signals that have nonzero dot product with the reference signal.
+
+
3. Effects of the neural response analysis method
3.1 Overlapping epochs can introduce artifacts
+A rhythm can be created based on an arbitrary signal by adding delayed versions of
+the signal to itself. An illustration is shown in Fig. 4A, in which the signal to analyze
only consists of a pulse at 4.8 s and is 0 otherwise. When the signal is chunked into
5-s epochs with 4-s overlap, however, the averaged epoch clearly becomes periodic,
and the period equals the distance between adjacent epoch onsets, i.e., 1 s. Another
+
+0.2
+0.4
+0.6
+0.8
+0.0
+0.0
+0.0
+0.0
+0.5
+0.5
+1.9
+3.7
+6.3
+3.7
+0.5
+0.5example is shown in Fig. 4B, in which a white noise is chunked into 5 s epochs in the
+same way. The spectrum averaged over 100 epochs clearly shows a peak at 1 Hz. In
+fact, in hearing research, this method has been employed to generate pitch perception
+based on, e.g., white noise (Yost, 1996).
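The white-noise example can be reproduced in a few lines. The sketch below (Python/NumPy; the sampling rate, recording length, and number of noise realizations are illustrative choices that mirror Fig. 4) averages overlapping 5-s epochs of white noise and shows that power piles up at 1 Hz and its harmonics, i.e., at the 1-s spacing between epoch onsets:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100                            # sampling rate in Hz (illustrative)
epoch_len, step = 5 * fs, 1 * fs    # 5-s epochs whose onsets are 1 s apart

spectra = []
for _ in range(20):                 # 20 independent noise recordings, as in Fig. 4E
    noise = rng.standard_normal(fs * 60)
    starts = range(0, len(noise) - epoch_len + 1, step)
    avg = np.mean([noise[s:s + epoch_len] for s in starts], axis=0)
    spectra.append(np.abs(np.fft.rfft(avg)))
spectrum = np.mean(spectra, axis=0)

# The epoch is 5 s long, so 1 Hz falls in bin 5; spurious peaks appear
# at 1 Hz and its harmonics even though the input is white noise
bins = np.arange(len(spectrum))
peak = spectrum[(bins % 5 == 0) & (bins > 0)].mean()
baseline = spectrum[(bins % 5 != 0) & (bins > 0)].mean()
print(peak / baseline)              # substantially greater than 1
```

Setting `step` equal to `epoch_len` (no overlap) removes the spurious peaks, which is the remedy recommended later in this article.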
+
If inappropriate data epoching can introduce artifacts, why not directly apply the
Fourier transform to the unepoched data? A direct Fourier transform of the unepoched
+data can indeed yield a high-frequency-resolution spectrum of the response.
+Nevertheless, in real EEG/MEG recordings, strong artifacts caused by, e.g., head
+movements or hardware glitches, can barely be avoided during a long recording, and
+excluding recordings with large artifacts from further analyses is a common practice
in EEG/MEG analysis. It is suboptimal, however, to throw away a long recording
based on a few sparsely located artifacts. Therefore, segmenting a long recording into
shorter epochs and only removing epochs with obvious artifacts is a common strategy.
+
3.2 Analysis window determines the width of spectral peaks
+Suppose a frequency-tagged neural response has a period of T s, and D seconds of
+recording is transformed into the frequency domain using the DFT. The DFT
+spectrum consists of coefficients corresponding to discrete frequencies, i.e., 1/D Hz,
+2/D Hz, 3/D Hz, etc. If D is a multiple of T, the frequency-tagged response is resolved
+in the spectrum. In other words, if D = kT, where k is an integer, the kth DFT
+coefficient corresponds to 1/T Hz, i.e., the target frequency. In this case, the response
+spectrum only has power at 1/T Hz and harmonically related frequencies. An example
+is shown in Fig. 5A (upper panel), where T is 0.5 s, D is 5 s, and the neural response is
exactly a sinusoid. The response spectrum has a sharp peak at 2 Hz and the power in
+
adjacent frequency bins is 0. Any DFT coefficient not at 2 Hz is zero since the dot
product between two D-s-long sinusoids at different frequencies resolved by the DFT
is zero (Fig. 3BC). When D is not a multiple of T, however, the DFT spectrum does not
+have a frequency bin corresponding to 1/T Hz and the power of the signal spreads to
+many frequency bins near 1/T Hz, a phenomenon known as frequency leakage. An
example is shown in Fig. 5B (upper panel), where T is still 0.5 s but D is 5.1 s.
+
+Figure 4. Overlapping epochs can lead to spurious peaks in the spectrum. A) A
+nonperiodic signal that is composed of a single pulse. B) The signal in A is segmented
+into 5-s epochs that have 4-s overlap with each other. C) The average of the epochs in
+B. D) The same epoching process is applied to white noise and the resulting
waveform is shown. E) The spectrum of the signal in D. To obtain a robust result,
twenty independent white-noise signals are generated and processed in the same way,
and the spectra are averaged.
+
+
+
+1.5
+0.4
+0.2
+0.5
+1.5
+3.5A common strategy to alleviate frequency leakage is to multiply a smoothing window
+to the signal before the Fourier transform. The spectra of the windowed signals are
+shown in Fig. 5 (lower panel). With the smoothing window, the signal duration no
+longer strongly affects the shape of the spectrum, but the spectrum always has
+nonzero power in frequency bins near the target frequency, i.e., 2 Hz. The main
+difference between the methods in Fig. 5 is whether all the power of a sinusoid
concentrates in a single frequency bin or spreads to several bins. Although not further
illustrated here, the conclusions apply to other variations of the analysis method, such as
padding the signal with zeros or using the wavelet transform instead of the Fourier transform.
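The effect of a smoothing window on where the power lands can be seen even when the window length is an exact multiple of the period. The sketch below (Python/NumPy; all parameters are illustrative) compares the rectangular window, i.e., no windowing, with a Hann window for a 2-Hz sinusoid in a 5-s window:

```python
import numpy as np

fs, duration, f0 = 100, 5.0, 2.0     # illustrative parameters
t = np.arange(int(fs * duration)) / fs
x = np.sin(2 * np.pi * f0 * t)       # exactly 10 cycles in the window
k = int(f0 * duration)               # DFT bin at the target frequency

ratios = {}
for name, w in (("rectangular", np.ones(len(x))), ("hann", np.hanning(len(x)))):
    spec = np.abs(np.fft.rfft(x * w))
    # bins just below, at, and just above 2 Hz, relative to the 2-Hz bin
    ratios[name] = spec[k - 1:k + 2] / spec[k]
    print(name, np.round(ratios[name], 3))
```

With no window, essentially all power sits in the 2-Hz bin; the Hann window moves roughly half of the amplitude into each adjacent bin, illustrating the trade-off described above.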
+
+Shall we care about whether the signal power concentrates in a single frequency bin
or not? The answer is yes under certain conditions. For example, a convenient approach to test
+the statistical significance of a frequency tagged response is to compare the power at
+the target frequency with the power in adjacent frequency bins (Benjamin et al., 2021;
+Ding et al., 2016; Nozaradan et al., 2011). The statistical power of this approach is
+clearly compromised when the power in adjacent frequency bins is elevated from
+baseline. Even when the statistical significance of the frequency-tagged response is
+tested using other methods, e.g., in comparison with a control condition that does not
+have the frequency-tagged response (Andersen et al., 2008), the statistical power of
+the test can benefit from concentrating all power of the frequency-tagged response
+into a single frequency.
+
+More generally, when the periodicity of a signal is unknown and needs to be
+determined using the Fourier analysis, a smoothing window often helps. Nevertheless,
+in the frequency-tagging paradigm, the target frequency is known and therefore a
+
+smoothing window is not necessary. In other words, in the frequency-tagging
+approach, the purpose of data analysis is not to estimate the periodicity of a response
+but to detect the presence of a response with a known frequency. Based on the signal
detection theory (Poor, 1998), optimal detection of a sinusoid generally involves
calculating the dot product between the recorded signal and the target signal, which
can be viewed as a sinusoid in the frequency-tagging paradigm, and such a dot product
can be conveniently calculated using the DFT.
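As a concrete illustration of this point, the sketch below (Python/NumPy; the 2-Hz response, its amplitude, its phase, and the noise level are all hypothetical) computes the detection statistic as the dot product between a noisy recording and a complex sinusoid at the target frequency, and verifies that this equals reading out a single DFT coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur, f0 = 100, 5.0, 2.0           # hypothetical 2-Hz tagged response
t = np.arange(int(fs * dur)) / fs
recording = 0.2 * np.sin(2 * np.pi * f0 * t + 0.7) + rng.standard_normal(len(t))

# Detection statistic: dot product with a complex sinusoid at the target
# frequency; taking the magnitude makes it insensitive to response phase
ref = np.exp(-2j * np.pi * f0 * t)
stat = np.abs(recording @ ref)

# The same number is simply the magnitude of one DFT coefficient
k = int(f0 * dur)                     # bin index of f0 (resolution is 1/dur Hz)
assert np.isclose(stat, np.abs(np.fft.fft(recording)[k]))
print(stat)
```

Using a complex exponential rather than a single sine or cosine is what makes the statistic phase-insensitive, which matters because the latency of a neural response is generally unknown.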
+
+
+Figure 5. Frequency leakage and windowing. The signal to analyze is a 2-Hz sinusoid
+and the duration of the signal is 5 s in panel A and 5.1 s in panel B. The upper panel is
+the DFT of the signal and the lower panel is the DFT of the signal smoothed by a
+Hanning window.
+
4. Summary
+First, in the frequency tagging paradigm, the target frequency is the frequency at
+which a stimulus feature or sequence structure repeats, which in general does not
+relate to how long the feature or structure lasts or how fast it varies within each
+period. Second, the Fourier transform does not provide a one-size-fits-all solution to
extract all periodicities in a signal. On the stimulus side, caution is needed, e.g.,
when making sure that a stimulus does not contain any conceivable periodicity at a
+target frequency. On the response side, more advanced feature extraction methods
+may be necessary to identify a frequency-tagged response. For example, for the
+
+0.5
+0.5
+2.5signals in Fig. 2, taking the absolute value of the first-order derivative of the signal
+can reflect the 1-Hz periodicity in the signal.
+
+Finally, I recommend the following as a relatively safe procedure to analyze
+frequency-tagged responses.
(1) The response being analyzed should contain exactly an integer number of periods
of the frequency-tagged response. More specifically, if the response sampling rate is F
and the response is frequency tagged at f, the number of samples per cycle of the
response is F/f, which does not need to be an integer. Nevertheless, if k cycles are
included in the analysis window, k should be an integer and the total number of
samples in the analysis window, i.e., kF/f, should also be an integer.
(2) When the stimulus lasts for a very long duration (e.g., several minutes), the
response recorded throughout the presentation of the stimulus can be directly
transformed into the frequency domain. Alternatively, it can be segmented into shorter
epochs, e.g., to remove epochs with large artifacts, and averaged. The epochs,
however, should not overlap.
(3) No smoothing window is necessary when performing the Fourier analysis, as long
as the target frequency is known and the analysis window contains an integer number
of cycles of the target response.
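The recommended procedure can be summarized in a short analysis function. The sketch below (Python/NumPy; the function name and all parameters are illustrative, not from a published pipeline) epochs a recording into nonoverlapping windows containing an integer number of cycles, averages them, and applies the DFT without a smoothing window:

```python
import numpy as np

def frequency_tagged_spectrum(response, fs, f_target, n_cycles):
    """Epoch into nonoverlapping windows of n_cycles cycles, average,
    and return the unwindowed DFT magnitude spectrum."""
    n_samples = round(n_cycles * fs / f_target)
    # Recommendation (1): the window must hold an integer number of samples
    assert np.isclose(n_samples, n_cycles * fs / f_target)
    n_epochs = len(response) // n_samples            # (2): no overlap
    epochs = response[:n_epochs * n_samples].reshape(n_epochs, n_samples)
    avg = epochs.mean(axis=0)
    spectrum = np.abs(np.fft.rfft(avg)) / len(avg)   # (3): no smoothing window
    freqs = np.fft.rfftfreq(len(avg), d=1 / fs)
    return freqs, spectrum

# Hypothetical example: a weak 2-Hz response in noise, fs = 100 Hz
rng = np.random.default_rng(2)
fs, f0 = 100, 2.0
t = np.arange(fs * 60) / fs
x = 0.3 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal(len(t))
freqs, spec = frequency_tagged_spectrum(x, fs, f0, n_cycles=10)
k = np.argmin(np.abs(freqs - f0))
print(spec[k], spec[k - 1], spec[k + 1])  # target bin stands out from neighbors
```

In practice, artifact-contaminated epochs would be dropped before the averaging step, which this sketch omits.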
+
+
+
+
+Acknowledgement
I thank Wenhui Sun for helping format the bibliography. This work was supported
+by the National Natural Science Foundation of China (32222035) and Key R & D
+Program of Zhejiang (2022C03011).
+
References
+Andersen, S. K., Hillyard, S. A., & Muller, M. M. (2008). Attention facilitates
+multiple stimulus features in parallel in human visual cortex. Curr Biol,
+18(13), 1006-1009. https://doi.org/10.1016/j.cub.2008.06.030
+Batterink, L. J., & Paller, K. A. (2019). Statistical learning of speech regularities can
+occur outside the focus of attention. Cortex, 115, 56-71.
+https://doi.org/10.1016/j.cortex.2019.01.013
+Benitez-Burraco, A., & Murphy, E. (2019). Why Brain Oscillations Are Improving
+Our Understanding of Language. Front Behav Neurosci, 13, 190.
+https://doi.org/10.3389/fnbeh.2019.00190
+Benjamin, L., Dehaene-Lambertz, G., & Fló, A. (2021). Remarks on the analysis of
+steady-state responses: Spurious artifacts introduced by overlapping epochs.
+Cortex, 142. https://doi.org/10.1016/j.cortex.2021.05.023
+Buiatti, M., Peña, M., & Dehaene-Lambertz, G. (2009). Investigating the neural
+correlates of continuous speech computation with frequency-tagged
+neuroelectric responses. Neuroimage, 44(2), 509-519.
+https://doi.org/https://doi.org/10.1016/j.neuroimage.2008.09.015
+Choi, D., Batterink, L. J., Black, A. K., Paller, K. A., & Werker, J. F. (2020). Preverbal
+Infants Discover Statistical Word Patterns at Similar Rates as Adults: Evidence
+From Neural Entrainment. Psychological Science, 31(9), 1161-1173.
+https://doi.org/10.1177/0956797620933237
+Ding, N., Melloni, L., Zhang, H., Tian, X., & Poeppel, D. (2016). Cortical tracking of
+hierarchical linguistic structures in connected speech. Nat Neurosci, 19(1),
+158-164. https://doi.org/10.1038/nn.4186
+Elhilali, M., Xiang, J., Shamma, S. A., & Simon, J. Z. (2009). Interaction between
+
+attention and bottom-up saliency mediates the representation of foreground
+and background in an auditory scene. PLoS Biol, 7(6), e1000129.
+https://doi.org/10.1371/journal.pbio.1000129
+Galambos, R., Makeig, S., & Talmachoff, P. J. (1981). A 40-Hz auditory potential
+recorded from the human scalp. Proceedings of the National Academy of
+Sciences, 78(4), 2643-2647. https://doi.org/10.1073/pnas.78.4.2643
+Gao, X., Wang, Y., Chen, X., & Gao, S. (2021). Interface, interaction, and intelligence
+in generalized brain–computer interfaces. Trends in Cognitive
+Sciences, 25(8), 671-684. https://doi.org/10.1016/j.tics.2021.04.003
+Glushko, A., Poeppel, D., & Steinhauer, K. (2022). Overt and implicit prosody
+contribute to neurophysiological responses previously attributed to
+grammatical processing. Scientific Reports, 12(1), 14759.
+https://doi.org/10.1038/s41598-022-18162-3
+Henin, S., Turk-Browne, N. B., Friedman, D., Liu, A., Dugan, P., Flinker, A., Doyle,
+W., Devinsky, O., & Melloni, L. (2021). Learning hierarchical sequence
+representations across human cortex and hippocampus. Science Advances,
+7(8), eabc4530. https://doi.org/10.1126/sciadv.abc4530
+Herrmann, C. S. (2001). Human EEG responses to 1-100 Hz flicker: resonance
+phenomena in visual cortex and their potential correlation to cognitive
+phenomena. Exp Brain Res, 137(3-4), 346-353.
+https://doi.org/10.1007/s002210100682
+Kaufeld, G., Bosker, H. R., Ten Oever, S., Alday, P. M., Meyer, A. S., & Martin, A. E.
+(2020). Linguistic Structure and Meaning Organize Neural Oscillations into a
+Content-Specific Hierarchy. J Neurosci, 40(49), 9467-9475.
+https://doi.org/10.1523/JNEUROSCI.0302-20.2020
+Kazanina, N., & Tavano, A. (2022). What neural oscillations can and cannot do for
+syntactic structure building. Nature Reviews Neuroscience.
+https://doi.org/10.1038/s41583-022-00659-5
+Keitel, A., Gross, J., & Kayser, C. (2018). Perceptually relevant speech tracking in
+auditory and motor cortex reflects distinct linguistic features. PLoS Biol,
+
+16(3), e2004473. https://doi.org/10.1371/journal.pbio.2004473
+Lo, C.-W., Tung, T.-Y., Ke, A. H., & Brennan, J. R. (2022). Hierarchy, Not Lexical
+Regularity, Modulates Low-Frequency Neural Synchrony During Language
+Comprehension. Neurobiology of Language, 3(4), 538-555.
+https://doi.org/10.1162/nol_a_00077
+Lu, L., Sheng, J., Liu, Z., & Gao, J. H. (2021). Neural representations of imagined
+speech revealed by frequency-tagged magnetoencephalography responses.
+Neuroimage, 229, 117724. https://doi.org/10.1016/j.neuroimage.2021.117724
+Makov, S., Sharon, O., Ding, N., Ben-Shachar, M., Nir, Y., & Zion Golumbic, E.
+(2017). Sleep Disrupts High-Level Speech Parsing Despite Significant Basic
+Auditory Processing. J Neurosci, 37(32), 7772-7781.
+https://doi.org/10.1523/JNEUROSCI.0168-17.2017
+Meng, Q., Hegner, Y. L., Giblin, I., McMahon, C., & Johnson, B. W. (2021).
+Lateralized Cerebral Processing of Abstract Linguistic Structure in Clear and
+Degraded Speech. Cereb Cortex, 31(1), 591-602.
+https://doi.org/10.1093/cercor/bhaa245
+Meyer, L. (2018). The neural oscillations of speech processing and language
+comprehension: state of the art and emerging mechanisms. Eur J Neurosci,
+48(7), 2609-2621. https://doi.org/10.1111/ejn.13748
+Norcia, A. M., Appelbaum, L. G., Ales, J. M., Cottereau, B. R., & Rossion, B. (2015).
+The steady-state visual evoked potential in vision research: A review. Journal
+of Vision, 15(6), 4-4. https://doi.org/10.1167/15.6.4
+Nozaradan, S., Peretz, I., Missal, M., & Mouraux, A. (2011). Tagging the neuronal
+entrainment to beat and meter. J Neurosci, 31(28), 10234-10240.
+https://doi.org/10.1523/JNEUROSCI.0411-11.2011
Oppenheim, A. V., Buck, J. R., & Schafer, R. W. (2001). Discrete-time signal
processing (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
+Picton, T. W., John, M. S., Dimitrijevic, A., & Purcell, D. (2003). Human auditory
+steady-state responses: Respuestas auditivas de estado estable en humanos.
+International Journal of Audiology, 42(4), 177-219.
+
+https://doi.org/10.3109/14992020309101316
+Poor, H. V. (1998). An introduction to signal detection and estimation. Springer
+Science & Business Media.
+Regan, D. (1977). Steady-state evoked potentials. Journal of the Optical Society of
+America, 67(11), 1475-1489. https://doi.org/10.1364/JOSA.67.001475
+Ross, B., Borgmann, C., Draganova, R., Roberts, L., & Pantev, C. (2000). A high-
+precision magnetoencephalographic study of human auditory steady-state
+responses to amplitude-modulated tones. The Journal of the Acoustical Society
+of America, 108, 679-691. https://doi.org/10.1121/1.429600
+Wang, Y., Ding, N., Ahmar, N., Xiang, J., Poeppel, D., & Simon, J. Z. (2012).
+Sensitivity to temporal modulation rate and spectral bandwidth in the human
+auditory system: MEG evidence. J Neurophysiol, 107(8), 2033-2041.
+https://doi.org/10.1152/jn.00310.2011
+Wong, P. C. M., Skoe, E., Russo, N. M., Dees, T., & Kraus, N. (2007). Musical
+experience shapes human brainstem encoding of linguistic pitch patterns.
+Nature Neuroscience, 10(4), 420-422. https://doi.org/10.1038/nn1872
+Yost, W. A. (1996). Pitch of iterated rippled noise. The Journal of the Acoustical
+Society of America, 100(1), 511-518. https://doi.org/10.1121/1.415873
+Zhou, H., Melloni, L., Poeppel, D., & Ding, N. (2016). Interpretations of Frequency
+Domain Analyses of Neural Entrainment: Periodicity, Fundamental Frequency,
+and Harmonics. Front Hum Neurosci, 10, 274.
+https://doi.org/10.3389/fnhum.2016.00274
+
+
diff --git a/zdAzT4oBgHgl3EQfC_pK/content/tmp_files/load_file.txt b/zdAzT4oBgHgl3EQfC_pK/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f6226c0e893e66a00ddfc10b92cab5bb04041446
--- /dev/null
+++ b/zdAzT4oBgHgl3EQfC_pK/content/tmp_files/load_file.txt
@@ -0,0 +1,666 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf,len=665
+page_content='Interpretation and Analysis of the Steady-State Neural Response to Complex Sequential Structures: a Methodological Note Nai Ding College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China Abstract Frequency tagging is a powerful approach to investigate the neural processing of sensory features, and is recently adapted to study the neural correlates of superordinate structures, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', chunks, in complex sequences such as speech and music.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' The nesting of sequence structures, the necessity to control the periodicity in sensory features, and the low-frequency nature of sequence structures pose new challenges for data analysis and interpretation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Here, I discuss how to interpret the frequency of a sequential structure, and factors that need to be considered when analyzing the periodicity in a signal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Finally, a safe procedure is recommended for the analysis of frequency-tagged responses.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Introduction Frequency tagging is a power technique to extract the neural response tracking a stimulus feature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' In general, in the frequency tagging paradigm, a target stimulus feature is periodically modulated at a frequency f.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Consequently, the neural response that dynamically tracks the stimulus feature also fluctuates at frequency f.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' The f-Hz frequency tagged response is often extracted using the Discrete Fourier Transform (DFT) or wavelet transform.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Frequency-tagging is a powerful paradigm for electroencephalography (EEG) and magnetoencephalography (MEG) studies since it can extract any neural response that follows the f-Hz change in the stimulus, regardless of the latency or waveform of the response.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' The paradigm has been widely applied to study visual (Norcia et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', 2015;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Regan, 1977) and auditory (Galambos et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', 1981;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Picton et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', 2003) processing: The frequency-tagged response to periodic changes in visual features, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', luminance, is referred to as the Steady State Visual Evoked Potentials (SSVEP), while the frequency-tagged response to periodic changes in auditory features, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', intensity, is referred to as the auditory Steady State Response (aSSR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' These responses are widely applied to study the basic properties of sensory encoding (Herrmann, 2001;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Ross et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', 2000;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Wang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', 2012;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Wong et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', 2007) and cognitive control (Andersen et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', 2008;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Elhilali et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', 2009;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Gao et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
More recently, the frequency-tagging paradigm has been applied to study the neural processing of superordinate structures in complex sequences, e.g., speech and music: the hypothesis in these studies is that a mentally constructed superordinate sequence structure, i.e., a sentence, is neurally represented by a response whose duration matches the duration of the structure in the stimulus (Buiatti et al., 2009; Ding et al., 2016; Nozaradan et al., 2011).
On the one hand, frequency tagging provides a powerful paradigm to investigate the neural processing of a chunk, in contrast to a brief stimulus event, and has stimulated a large number of studies (Batterink & Paller, 2019; Benitez-Burraco & Murphy, 2019; Choi et al., 2020; Glushko et al., 2022; Henin et al., 2021; Kaufeld et al., 2020; Kazanina & Tavano, 2022; Keitel et al., 2018; Lo et al., 2022; Lu et al., 2021; Makov et al., 2017; Meng et al., 2021; Meyer, 2018).
On the other hand, the complexity of the sequence processing problem has also posed more challenges for the analysis and interpretation of frequency-tagged responses. First, in traditional frequency-tagging studies, each stimulus feature of interest is tagged at a distinct frequency, while the structures in a complex sequence are often nested, so that different levels of structure cannot be tagged at unrelated arbitrary frequencies. For example, in the sentence "the cute boy smiled", the first three words construct a noun phrase based on syntax. Nevertheless, the 3-word noun phrase and the 4-word sentence are nested, so that they cannot be frequency tagged at unrelated frequencies. The nesting between structures leads to a dissociation between structure duration and structure repetition period, which is discussed in Section 2.1.
Second, traditional frequency-tagging studies explicitly create periodic changes in a stimulus feature, while studies on sequence structures sometimes need to avoid such periodic changes in basic stimulus features to isolate the neural response generated by internal mental processes. What is a neural response generated by internal mental processes? For example, a metrical structure may be imagined when listening to an isochronous beat sequence, and the neural response at the imagined meter rate can reflect internally driven processes (Nozaradan et al., 2011). Similarly, when a sequence of words is grouped into sentences based on syntactic rules, the neural response at the sentence rate can reflect higher-level sentence processing (Ding et al., 2016). In these situations, however, if a basic sensory feature has the same periodicity as the imagined meter or the syntactically constructed sentence, it is ambiguous whether the neural response tracks the sensory feature or the sequence structure. Therefore, it is often necessary to check the periodicity of stimulus features. Caution, however, is needed, since some types of periodicity are not captured by the Fourier transform, which is discussed in Section 2.2.
Third, the analysis of responses to frequency-tagged sequence structures is sometimes prone to artifacts that seldom affect the analysis of traditional frequency-tagged responses. Sequence structures often correspond to a very low frequency, e.g., < 3 Hz, and such a low frequency may be contaminated by overlap between analysis epochs (Benjamin et al., 2021). Section 3 illustrates why such artifacts may be generated and discusses potential guidelines for appropriate analysis of frequency-tagged responses, including the selection of the analysis duration and whether a smoothing window should be used.
This article discusses common technical issues rather than the analysis of a specific experiment. To facilitate interpretation, however, a hypothetical experiment is provided in Fig. 1A, although the conclusions are not limited to this example. The target audience is experimentalists rather than engineers; therefore, the article attempts to explain ideas using illustrations and skips mathematical derivations. The mathematical basis of the DFT can be found in classic textbooks such as Oppenheim et al. (2001).
2. What is not reflected by the Fourier transform

2.1 Frequency may not reflect the time constant or signal duration

For frequency-domain analysis, a central concept is frequency, which corresponds to the period of a signal. The period of a signal, however, does not necessarily coincide with other time constants of the signal. For example, an exponential signal e^(-t/τ) has a time constant τ, but the signal is aperiodic and τ is not a period of the signal. Even for a periodic signal, its period may dissociate from the time constant or duration of the waveform within a period; some examples are shown in Fig. 1B. In these examples, the signals have a period of 1 s, and the waveform within a period is shown in the left panel. The temporal parameters of the signal, including the time constant of an exponential function, the duration of a sawtooth signal, and the frequency of a single-cycle sinusoid, affect the shape of the Fourier spectrum but generally do not produce any spectral peak corresponding to these parameters. Instead, since the signal repeats at a rate of 1 Hz, the spectrum shows peaks at 1 Hz and harmonically related frequencies, i.e., 2 Hz, 3 Hz, etc. When the period of the signal changes, however, the spectral peaks shift accordingly, even if the waveform within a cycle remains unchanged (Fig. 1C). See Zhou et al. (2016) for more illustrations of how the spectrum is influenced by the signal repetition rate and the waveform within a period.
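The point can be checked numerically. The following minimal numpy sketch (the 1000-Hz sampling rate and the exponential waveforms are illustrative choices, not taken from the article) repeats a 1-s exponential decay with time constant 0.75 s or 0.5 s, and shows that the largest non-DC spectral peak sits at the 1-Hz repetition rate either way, never at 1/τ:

```python
import numpy as np

fs = 1000                        # sampling rate in Hz (illustrative choice)
t = np.arange(0, 1, 1 / fs)      # one 1-s period

for tau in (0.75, 0.5):          # the two time constants from Fig. 1B
    one_period = np.exp(-t / tau)
    x = np.tile(one_period, 10)              # 10 s of 1-s-periodic signal
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    # Skip the DC bin; the largest remaining component is at the
    # repetition rate (1 Hz), regardless of the time constant tau.
    peak = freqs[1 + np.argmax(spectrum[1:])]
    print(tau, peak)             # -> 0.75 1.0  /  0.5 1.0
```

Changing `tau` reshapes the relative heights of the harmonics but never moves the peaks away from 1 Hz and its multiples, mirroring Fig. 1B.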
Figure 1. Peaks in the spectrum reflect the periodicity of a signal. A) A hypothetical experimental condition, in which a noun phrase (NP) is embedded in a sentence (S). The duration of the NP is either 0.75 s or 0.5 s, and a neural response is hypothesized to be modulated by the duration of the NP. B) Signals that repeat every 1 s and the corresponding spectra. The left panel shows the waveform within a period; the black and blue curves have different time constants, i.e., 0.75 s and 0.5 s respectively. The right panel shows the spectrum, i.e., the magnitude of the DFT of 10 periods of the corresponding signal. The time constant and the corresponding frequency are shown by the vertical dotted lines. The spectrum has peaks at 1 Hz, i.e., the inverse of the signal period, and harmonically related frequencies, regardless of the time constant of the signal within a period. C) Signals constructed from the same sawtooth waveform but with different repetition rates. The spectral peaks always reflect the repetition rate.
+page_content='75 0////1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='2 Frequency may not reflect the rate of change Suppose a signal changes every T s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
Intuitively, its Fourier spectrum should peak at 1/T Hz. This intuition, however, is not always true, and an example is given in Fig. 2, in which the spectrum shows troughs at 1/T Hz and harmonically related frequencies. When the signal is employed to modulate the gain of a 4-Hz sinusoid, the modulated sinusoid does not show any power at 1/T Hz either. The purpose of these examples is to show that the Fourier transform may be blind to some rhythms. Why does the signal lack power at 1/T Hz? In the Fourier transform, the power at frequency f is determined by the dot product between the signal and sinusoids at frequency f (including both sine and cosine). The signals in Fig. 2 contain no fluctuations within each T-s interval, and therefore the signal has no correlation with sinusoids at 1/T Hz. Figure 3 illustrates the dot product between signals.
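This blindness is easy to reproduce. In the numpy sketch below (the sampling rate, signal length, and random step levels are assumed for illustration), a signal takes a new random value every T = 1 s and is flat in between; each flat segment is orthogonal to a full cycle of a 1-Hz sinusoid, so the DFT coefficient at 1/T Hz is zero up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T, n_steps = 100, 1.0, 20            # assumed values, not from the article
levels = rng.standard_normal(n_steps)    # a new random level every T seconds
x = np.repeat(levels, int(T * fs))       # piecewise-constant signal, 20 s long

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# Bin index of 1/T = 1 Hz; the signal "changes" at this rate, yet the
# spectrum has (numerically) no power there.
k = int(round((1 / T) * len(x) / fs))
print(freqs[k], spectrum[k])             # 1.0 and a value near machine zero
```

The signal clearly changes once per second, yet the 1-Hz bin is empty, exactly the kind of rhythm the Fourier transform does not register.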
Figure 2. The change rate of a signal can correspond to troughs in the spectrum. The upper panel shows a signal that changes once every 1 s, and the lower panel shows a 4-Hz sinusoid that is amplitude modulated by the signal in the upper panel. In the spectrum, troughs are observed at 1 Hz and harmonically related frequencies.
Figure 3. Illustration of the dot product between signals, which is the basis of the DFT. A) A 3-Hz sinusoid, which is employed to calculate the DFT coefficient at 3 Hz. B, C) Signals to analyze and their point-by-point products with the reference signal. The top signal is similar to the signal in Fig. 2, while the other 3 signals are sinusoids with the frequencies shown by the numbers in panel B. The sum of each product signal, i.e., the dot product between the two signals, is shown by the number in red in panel C. D, E) Examples of signals that have a nonzero dot product with the reference signal.
3. Effects of the neural response analysis method

3.1 Overlapping epochs can introduce artifacts

A rhythm can be created from an arbitrary signal by adding delayed versions of the signal to itself. An illustration is shown in Fig. 4A, in which the signal to analyze consists only of a pulse at 4.8 s and is 0 otherwise. When the signal is chunked into 5-s epochs with a 4-s overlap, however, the averaged epoch clearly becomes periodic, and the period is the same as the distance between adjacent epoch onsets, e.g., 1 s.
Another example is shown in Fig. 4B, in which white noise is chunked into 5-s epochs in the same way. The spectrum averaged over 100 epochs clearly shows a peak at 1 Hz.
In fact, in hearing research, this method has been employed to generate pitch perception based on, e.g., white noise (Yost, 1996).
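The epoching artifact of Fig. 4A can be reproduced in a few lines. In this numpy sketch (the 60-s recording length and 100-Hz sampling rate are assumed for illustration), the signal contains a single pulse, yet averaging 5-s epochs taken every 1 s manufactures a perfectly periodic 1-Hz "response":

```python
import numpy as np

fs = 100
x = np.zeros(60 * fs)            # 60 s of silence...
x[int(4.8 * fs)] = 1.0           # ...except one pulse at 4.8 s

# Chunk into 5-s epochs whose onsets are 1 s apart (i.e., 4-s overlap).
epoch_len, hop = 5 * fs, 1 * fs
starts = range(0, len(x) - epoch_len + 1, hop)
avg = np.mean([x[s:s + epoch_len] for s in starts], axis=0)

# The averaged epoch now contains copies of the pulse at 0.8, 1.8, ...,
# 4.8 s: a spurious 1-s periodicity created purely by the epoching.
print(np.nonzero(avg)[0] / fs)   # -> [0.8 1.8 2.8 3.8 4.8]
```

Nothing in the underlying signal repeats; the 1-s rhythm is entirely an artifact of the 1-s spacing between overlapping epochs.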
If inappropriate data epoching can introduce artifacts, why not directly apply the Fourier transform to the unepoched data? A direct Fourier transform of the unepoched data can indeed yield a high-frequency-resolution spectrum of the response. Nevertheless, in real EEG/MEG recordings, strong artifacts caused by, e.g., head movements or hardware glitches can barely be avoided during a long recording, and excluding recordings with large artifacts from further analyses is a common practice in EEG/MEG analysis. It is nonoptimal, however, to throw away a long recording based on a few sparsely located artifacts. Therefore, segmenting a long recording into shorter epochs and removing only the epochs with obvious artifacts is a common strategy.

3.2 Analysis window determines the width of spectral peaks

Suppose a frequency-tagged neural response has a period of T s, and D seconds of the recording are transformed into the frequency domain using the DFT. The DFT spectrum consists of coefficients corresponding to discrete frequencies, i.e., 1/D Hz, 2/D Hz, 3/D Hz, etc. If D is a multiple of T, the frequency-tagged response is resolved in the spectrum. In other words, if D = kT, where k is an integer, the kth DFT coefficient corresponds to 1/T Hz, i.e., the target frequency. In this case, the response spectrum only has power at 1/T Hz and harmonically related frequencies.
+page_content=' An example is shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' 5A (upper panel), where T is 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='5 s, D is 5 s, and the neural response is exactly a sinusoid.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' The response spectrum has a sharp peak at 4 Hz and the power in adjacent frequency bins is 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' The DFT coefficient not at 4 Hz is zero since the dot product between any two D-s long sinusoids at frequencies resolved by the DFT is zero (Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' 3BC).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' When D is not a multiple of T, however, the DFT spectrum does not have a frequency bin corresponding to 1/T Hz and the power of the signal spreads to many frequency bins near 1/T Hz, a phenomenon known as frequency leakage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' An example is shown Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' 5B (upper panel), where T is still 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='5 s but D is 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1 s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
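The dependence of the spectrum on whether D is a multiple of T can be checked numerically. The sketch below is a pure-Python illustration, not code from the paper; the sampling rate (50 Hz) and the power threshold are assumptions chosen for the example. It computes the DFT power of a 2-Hz sinusoid for D = 5 s and D = 5.1 s and lists the frequency bins carrying appreciable power:

```python
import cmath
import math

def dft_power(x):
    """Normalized power of each DFT coefficient of a real signal x."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) ** 2 / N ** 2
            for k in range(N)]

def significant_bins(D, fs=50, f=2.0, thresh=1e-6):
    """Bins below Nyquist with power above thresh, for a D-s sinusoid at f Hz."""
    N = int(round(D * fs))
    x = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]
    p = dft_power(x)
    return [k for k in range(1, N // 2) if p[k] > thresh]

print(significant_bins(5.0))   # D = 10T: all power falls in the 2-Hz bin (k = 10)
print(significant_bins(5.1))   # D not a multiple of T: power leaks across many bins
```

With D = 5 s the frequency resolution is 0.2 Hz and 2 Hz sits exactly on bin 10; with D = 5.1 s no bin corresponds to 2 Hz and dozens of bins carry leaked power.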
Figure 4. Overlapping epochs can lead to spurious peaks in the spectrum. A) A nonperiodic signal composed of a single pulse. B) The signal in A is segmented into 5-s epochs that overlap each other by 4 s. C) The average of the epochs in B. D) The same epoching process is applied to white noise and the resulting waveform is shown. E) The spectrum of the signal in D. To obtain a robust result, twenty independent white noise signals are generated and processed in the same way, and the spectra are averaged.
A common strategy to alleviate frequency leakage is to multiply the signal by a smoothing window before the Fourier transform. The spectra of the windowed signals are shown in Fig. 5 (lower panel). With the smoothing window, the signal duration no longer strongly affects the shape of the spectrum, but the spectrum always has nonzero power in frequency bins near the target frequency, i.e., 2 Hz. The main difference between the methods in Fig. 5 is whether all the power of a sinusoid concentrates in a single frequency bin or spreads over several bins. Although not further illustrated here, the conclusions also apply to other variations of the analysis method, such as zero-padding the signal or using the wavelet transform instead of the Fourier transform.
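This trade-off is easy to demonstrate: a periodic Hann window has a DFT that is exactly three bins wide, so windowing a sinusoid that falls on a bin spreads its power into the two neighboring bins — the price paid for suppressing far-off leakage. A pure-Python sketch with illustrative parameters (not from the paper):

```python
import cmath
import math

def dft_power(x):
    """Normalized power of each DFT coefficient of a real signal x."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) ** 2 / N ** 2
            for k in range(N)]

fs, f, D = 50, 2.0, 5.0             # 2-Hz sinusoid, 5-s window -> bin k = 10
N = int(D * fs)
x = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]
w = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]  # periodic Hann
p = dft_power([xi * wi for xi, wi in zip(x, w)])
print([k for k in range(1, N // 2) if p[k] > 1e-6])  # → [9, 10, 11]
```

Without the window the same signal occupies bin 10 alone; with the window the power is split 1/4 : 1/2 : 1/4 in amplitude over bins 9, 10, and 11.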
Shall we care whether the signal power concentrates in a single frequency bin or not? The answer is yes, under some conditions. For example, a convenient approach to test the statistical significance of a frequency-tagged response is to compare the power at the target frequency with the power in adjacent frequency bins (Benjamin et al., 2021; Ding et al., 2016; Nozaradan et al., 2011). The statistical power of this approach is clearly compromised when the power in adjacent frequency bins is elevated above baseline. Even when the statistical significance of the frequency-tagged response is tested using other methods, e.g., by comparison with a control condition that does not contain the frequency-tagged response (Andersen et al., 2008), the statistical power of the test can benefit from concentrating all power of the frequency-tagged response into a single frequency bin. More generally, when the periodicity of a signal is unknown and needs to be determined using Fourier analysis, a smoothing window often helps. Nevertheless, in the frequency-tagging paradigm, the target frequency is known, and therefore a smoothing window is not necessary. In other words, in the frequency-tagging approach, the purpose of data analysis is not to estimate the periodicity of a response but to detect the presence of a response at a known frequency. According to signal detection theory (Poor, 1998), optimal detection of a known signal generally involves calculating the dot product between the recorded signal and the target signal, which can be viewed as a sinusoid in the frequency-tagging paradigm, and this dot product can be conveniently calculated using the DFT.
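The detection view can be made concrete: the DFT coefficient at the target frequency is exactly the dot product of the recording with a complex sinusoid at that frequency. The sketch below is an illustration under assumed parameters (additive Gaussian noise, 100-Hz sampling), not an analysis from the paper:

```python
import cmath
import math
import random

def target_dot(x, fs, f):
    """Magnitude of the dot product between x and a complex sinusoid at f Hz.
    When the window holds an integer number of cycles of f, this is (up to
    scale) the DFT coefficient at the target frequency."""
    N = len(x)
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * f * n / fs)
                   for n in range(N))) / N

random.seed(0)                       # reproducible illustrative noise
fs, f, D = 100, 2.0, 5.0
N = int(D * fs)
noise = [random.gauss(0, 1) for _ in range(N)]
tagged = [math.sin(2 * math.pi * f * n / fs) + v for n, v in enumerate(noise)]
print(target_dot(tagged, fs, f))     # detection statistic with a response present
print(target_dot(noise, fs, f))      # the same statistic for noise alone
```

The statistic for the tagged recording sits near 0.5 (half the sinusoid's amplitude), while for pure noise it shrinks toward zero as the window grows.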
Figure 5. Frequency leakage and windowing. The signal to analyze is a 2-Hz sinusoid, and the duration of the signal is 5 s in panel A and 5.1 s in panel B. The upper panel is the DFT of the signal and the lower panel is the DFT of the signal smoothed by a Hanning window.
3. Summary

First, in the frequency-tagging paradigm, the target frequency is the frequency at which a stimulus feature or sequence structure repeats, which in general is unrelated to how long the feature or structure lasts or how fast it varies within each period. Second, the Fourier transform does not provide a one-size-fits-all solution to extract all periodicities in a signal. On the stimulus side, caution is needed, e.g., when making sure that a stimulus does not contain any conceivable periodicity at a target frequency. On the response side, more advanced feature-extraction methods may be necessary to identify a frequency-tagged response.
For example, for the signals in Fig. 2, taking the absolute value of the first-order derivative of the signal can reveal the 1-Hz periodicity in the signal.
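As a toy version of such a nonlinear feature-extraction step, the rectified first difference responds to when a signal changes rather than to its level, so periodic transitions become periodic pulses whose rate can then be read off with the DFT. The square-wave input below is illustrative only, not the stimulus of Fig. 2:

```python
def rectified_diff(x):
    """Absolute first-order difference: pulses at the samples where the
    signal changes, zero where it is flat."""
    return [abs(b - a) for a, b in zip(x, x[1:])]

# a slow square wave: 8 samples per period, with two level transitions per period
x = [1, 1, 1, 1, 0, 0, 0, 0] * 3
d = rectified_diff(x)
print([i for i, v in enumerate(d) if v])  # pulses every 4 samples: [3, 7, 11, 15, 19]
```

The rectified signal is periodic at the transition rate, which the raw waveform's Fourier spectrum does not isolate directly.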
Finally, I recommend the following as a relatively safe procedure for analyzing frequency-tagged responses. (1) The response being analyzed should contain exactly an integer number of periods of the frequency-tagged response. More specifically, if the response sampling rate is F and the response is frequency tagged at f, the number of samples per cycle of the response is F/f, which does not need to be an integer. Nevertheless, if k cycles are included in the analysis window, both k and the total length of the analysis window in samples, i.e., kF/f, should be integers.
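Recommendation (1) reduces to a divisibility check on the window length in samples. A minimal helper sketch (the function name and numbers are illustrative, not from the paper):

```python
def window_length_in_samples(fs, f, k):
    """Length in samples of a k-cycle analysis window for a response tagged
    at f Hz and sampled at fs Hz; raises if k*fs/f is not a whole number."""
    n = k * fs / f
    if abs(n - round(n)) > 1e-9:
        raise ValueError(
            f"{k} cycle(s) at {f} Hz is not a whole number of samples at {fs} Hz")
    return round(n)

# 1.5-Hz tagging sampled at 500 Hz: one cycle is 333.33 samples (not usable),
# but 3 cycles fit exactly into 1000 samples
print(window_length_in_samples(500, 1.5, 3))  # → 1000
```

Note that a single cycle here would be rejected, matching the point in the text that F/f itself need not be an integer as long as kF/f is.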
(2) When the stimulus lasts for a very long duration (e.g., several minutes), the response recorded throughout the presentation of the stimulus can be directly transformed into the frequency domain. Alternatively, it can be segmented into shorter epochs, e.g., to remove epochs with large artifacts, and averaged. The epochs, however, should not overlap. (3) No smoothing window is necessary when performing the Fourier analysis, as long as the target frequency is known and the analysis window contains an integer number of cycles of the target response.
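The segment-reject-average strategy in (2) can be sketched in a few lines. This is a pure-Python illustration; the rejection rule and the toy numbers are assumptions, not the paper's pipeline:

```python
def average_clean_epochs(x, epoch_len, reject):
    """Cut x into non-overlapping epochs of epoch_len samples, drop epochs
    for which reject(epoch) is True, and average the rest sample by sample."""
    epochs = [x[i:i + epoch_len]
              for i in range(0, len(x) - epoch_len + 1, epoch_len)]
    kept = [e for e in epochs if not reject(e)]
    if not kept:
        raise ValueError("all epochs rejected")
    return [sum(vals) / len(kept) for vals in zip(*kept)]

# toy recording: 3 epochs of 4 samples; the middle epoch carries a large artifact
x = [1, 2, 1, 2,  1, 2, 99, 2,  1, 2, 1, 2]
avg = average_clean_epochs(x, 4, reject=lambda e: max(abs(v) for v in e) > 50)
print(avg)  # → [1.0, 2.0, 1.0, 2.0]
```

Because the slicing step equals the epoch length, the epochs cannot overlap, avoiding the spurious spectral peaks illustrated in Fig. 4.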
Acknowledgement

I thank Wenhui Sun for help with formatting the bibliography. This work was supported by the National Natural Science Foundation of China (32222035) and the Key R&D Program of Zhejiang (2022C03011).
Reference

Andersen, S. K., Hillyard, S. A., & Muller, M. M. (2008). Attention facilitates multiple stimulus features in parallel in human visual cortex. Curr Biol, 18(13), 1006-1009. https://doi.org/10.1016/j.cub.2008.06.030

Batterink, L. J., & Paller, K. A. (2019). Statistical learning of speech regularities can occur outside the focus of attention. Cortex, 115, 56-71. https://doi.org/10.1016/j.cortex.2019.01.013

Benitez-Burraco, A., & Murphy, E. (2019). Why Brain Oscillations Are Improving Our Understanding of Language. Front Behav Neurosci, 13, 190. https://doi.org/10.3389/fnbeh.2019.00190

Benjamin, L., Dehaene-Lambertz, G., & Fló, A. (2021). Remarks on the analysis of steady-state responses: Spurious artifacts introduced by overlapping epochs. Cortex, 142. https://doi.org/10.1016/j.cortex.2021.05.023

Buiatti, M., Peña, M., & Dehaene-Lambertz, G. (2009). Investigating the neural correlates of continuous speech computation with frequency-tagged neuroelectric responses. Neuroimage, 44(2), 509-519. https://doi.org/10.1016/j.neuroimage.2008.09.015

Choi, D., Batterink, L. J., Black, A. K., Paller, K. A., & Werker, J. F. (2020). Preverbal Infants Discover Statistical Word Patterns at Similar Rates as Adults: Evidence From Neural Entrainment. Psychological Science, 31(9), 1161-1173. https://doi.org/10.1177/0956797620933237

Ding, N., Melloni, L., Zhang, H., Tian, X., & Poeppel, D. (2016). Cortical tracking of hierarchical linguistic structures in connected speech. Nat Neurosci, 19(1), 158-164. https://doi.org/10.1038/nn.4186

Elhilali, M., Xiang, J., Shamma, S. A., & Simon, J. Z. (2009). Interaction between attention and bottom-up saliency mediates the representation of foreground and background in an auditory scene. PLoS Biol, 7(6), e1000129. https://doi.org/10.1371/journal.pbio.1000129

Galambos, R.
+page_content=', Makeig, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Talmachoff, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (1981).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' A 40-Hz auditory potential recorded from the human scalp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Proceedings of the National Academy of Sciences, 78(4), 2643-2647.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1073/pnas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='2643 Gao, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Wang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Chen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Gao, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Interface, interaction, and intelligence in generalized brain–' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='computer interfaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Trends in Cognitive Sciences, 25(8), 671-684.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1016/j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='tics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='04.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='003 Glushko, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Poeppel, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Steinhauer, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Overt and implicit prosody contribute to neurophysiological responses previously attributed to grammatical processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Scientific Reports, 12(1), 14759.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1038/s41598-022-18162-3 Henin, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Turk-Browne, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Friedman, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Liu, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Dugan, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Flinker, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Doyle, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Devinsky, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Melloni, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Learning hierarchical sequence representations across human cortex and hippocampus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Science Advances, 7(8), eabc4530.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1126/sciadv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='abc4530 Herrmann, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2001).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Human EEG responses to 1-100 Hz flicker: resonance phenomena in visual cortex and their potential correlation to cognitive phenomena.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Exp Brain Res, 137(3-4), 346-353.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1007/s002210100682 Kaufeld, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Bosker, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Ten Oever, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Alday, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Meyer, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Martin, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Linguistic Structure and Meaning Organize Neural Oscillations into a Content-Specific Hierarchy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' J Neurosci, 40(49), 9467-9475.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1523/JNEUROSCI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='0302-20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='2020 Kazanina, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Tavano, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' What neural oscillations can and cannot do for syntactic structure building.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Nature Reviews Neuroscience.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1038/s41583-022-00659-5 Keitel, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Gross, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Kayser, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Perceptually relevant speech tracking in auditory and motor cortex reflects distinct linguistic features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' PLoS Biol, 16(3), e2004473.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1371/journal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='pbio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='2004473 Lo, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='-W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Tung, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Ke, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Brennan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Hierarchy, Not Lexical Regularity, Modulates Low-Frequency Neural Synchrony During Language Comprehension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Neurobiology of Language, 3(4), 538-555.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1162/nol_a_00077 Lu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Sheng, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Liu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Gao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Neural representations of imagined speech revealed by frequency-tagged magnetoencephalography responses.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Neuroimage, 229, 117724.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1016/j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='neuroimage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='117724 Makov, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Sharon, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Ding, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Ben-Shachar, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Nir, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Zion Golumbic, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Sleep Disrupts High-Level Speech Parsing Despite Significant Basic Auditory Processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' J Neurosci, 37(32), 7772-7781.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1523/JNEUROSCI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='0168-17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='2017 Meng, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Hegner, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Giblin, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', McMahon, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Johnson, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Lateralized Cerebral Processing of Abstract Linguistic Structure in Clear and Degraded Speech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Cereb Cortex, 31(1), 591-602.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1093/cercor/bhaa245 Meyer, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' The neural oscillations of speech processing and language comprehension: state of the art and emerging mechanisms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Eur J Neurosci, 48(7), 2609-2621.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1111/ejn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='13748 Norcia, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Appelbaum, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Ales, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Cottereau, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Rossion, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' The steady-state visual evoked potential in vision research: A review.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Journal of Vision, 15(6), 4-4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1167/15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='4 Nozaradan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Peretz, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Missal, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Mouraux, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2011).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Tagging the neuronal entrainment to beat and meter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' J Neurosci, 31(28), 10234-10240.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='1523/JNEUROSCI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='0411-11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content='2011 Oppenheim, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', Buck, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=', & Schafer, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' (2001).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}
+page_content=' Discrete-time signal processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}